Oi! Not encrypting RPC traffic? IETF bods would like to change that

An Internet Engineering Task Force group has turned its attention to how Remote Procedure Calls (RPC) travel over the internet, and decided a bit of (easy) encryption is in order. RPC hasn't been updated in more than a decade, and while an attempt was made to bestow encryption upon it in 2016 (in RFC 7861, RPCSEC_GSS version 3), take-up is …

  1. JimmyPage Silver badge
    Boffin

    Was RPC ever meant to be exposed over a public network?

    An awful lot of these "vulnerabilities" have arisen because protocols were pushed beyond their intended audience. Just as people in the real world keep using "not for commercial use" products until they break, we're seeing the same with protocols.

    The bottom line is that when the internet was developed, the idea of man+dog accessing it was only (bad) science fiction. Added to which, encryption tech was primitive and inefficient.

    If you burrow into any original protocol, you'll inevitably find vulnerabilities.

    The real task is making the next generation backward compatible. And that, my friends, is where the next generation of vulnerabilities will come from.

    1. Spazturtle Silver badge

      Re: Was RPC ever meant to be exposed over a public network?

      This vulnerability can be prevented by using a VPN between your sites. This issue only exists in incorrectly configured networks.

      1. Michael Wojcik Silver badge

        Re: Was RPC ever meant to be exposed over a public network?

        This vulnerability can be prevented by using a VPN between your sites.

        Until you have an attacker in the network, in a position to monitor or interpose traffic, but not yet with comprehensive elevated privileges. Then RPC becomes a fine way to pivot and escalate.

        The eggshell network-security model (hard perimeter, soft inside) lacks defense in depth, as many organizations have learned to their sorrow.

    2. Anonymous Coward
      Anonymous Coward

      Re: Was RPC ever meant to be exposed over a public network?

      I can see that I'm not the only one with that concern, then. I'm also more than a little concerned that any security failure in TLS itself is going to open a yawning canyon in the overall security for all computing. I've no idea if/when that might happen, but we've been surprised that way before.

      1. Michael Wojcik Silver badge

        Re: Was RPC ever meant to be exposed over a public network?

        any security failure in TLS itself is going to open a yawning canyon in the overall security for all computing. I've no idea if/when that might happen, but we've been surprised that way before.

        Er ... there have been many failures in TLS, both in the protocol and in its implementations. CRIME, BREACH, Lucky13, Logjam; MD5, RC4, and RSA weaknesses; Heartbleed and goto fail... I could go on.[1] Every SSL/TLS protocol version prior to TLSv1.2 has serious published vulnerabilities. Many of the suites still available in 1.2 have major issues. Many applications continue to use known-broken implementations. Many applications that use implementations without known severe or critical bugs do so incorrectly.

        And then there's the ongoing complete fucking disaster that is PKIX.
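
        For contrast, doing it correctly isn't even hard. A minimal sketch with Python's ssl module (the host name is a placeholder of mine, not from anything above): floor the protocol at TLSv1.2 and leave certificate and hostname verification switched on.

          import socket
          import ssl

          ctx = ssl.create_default_context()             # PKIX checks on by default
          ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSLv3/TLS 1.0/1.1
          ctx.check_hostname = True                      # explicit, though the default
          ctx.verify_mode = ssl.CERT_REQUIRED

          with socket.create_connection(("example.org", 443)) as raw:
              with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
                  print(tls.version())                   # e.g. 'TLSv1.3'

        The "incorrect" applications are the ones that flip those verification switches off, or leave the protocol floor at defaults old enough to admit the broken versions.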

        [1] A couple of years ago I did, in fact, go on at some length on this topic, in a presentation for ISSA. It's probably available on their website somewhere.

  2. Destroy All Monsters Silver badge
    Meh

    A step in the right direction. I guess.

    From Convenience over Correctness to Encrypted Convenience over Correctness.

    RPC will stay as evil crud, but if you must have it... by all means, encrypt.

  3. chasil

    stunnel, WireGuard

    I used stunnel in the past to encrypt NFSv4 over TCP. NFS makes use of ONC RPC.

    WireGuard also has a much, much smaller footprint than any TLS implementation, and would likely shield any and all RPC traffic.

    https://www.linuxjournal.com/content/encrypting-nfsv4-stunnel-tls
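
    For anyone wanting to try it, the service definition is only a few lines. A minimal sketch of the client side, in the spirit of the article linked above (host names, the 2323 relay port, and file paths are placeholders I've picked, not anything canonical):

      ; stunnel client-side service: local cleartext in, TLS out.
      ; The NFS client mounts via 127.0.0.1:2323; the server-side
      ; stunnel listens on nfs.example.com:2323 and relays to nfsd.
      [nfs-tls]
      client = yes
      accept = 127.0.0.1:2323
      connect = nfs.example.com:2323
      CAfile = /etc/stunnel/ca.pem
      verifyChain = yes
      checkHost = nfs.example.com

    A matching server-side stanza (client = no, a cert/key pair, connect = 127.0.0.1:2049) completes the tunnel, and the client then mounts against 127.0.0.1 with something like -o port=2323.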

    1. CheesyTheClown

      Re: stunnel, wireguard

      TLS 1.3 is a major change. I'd imagine that with new protocols, we'd use TLS and DTLS 1.3 as opposed to earlier versions.

      Also consider that the performance issues with earlier versions of TLS have been mostly handshake-related. That's a short-term cost for NFS, since NFS connections are long-lived.

      There are some real issues with NFSv4 which make it unsuitable for environments that require distance. It's not nearly as terrible as a Fibre Channel technology, but it can be pretty bad all the same. Most people don't properly prepare their networks for NFSv4: NFS loses so much performance that it's barely usable if the MTU on the connection is less than 8500 bytes.

      NFS also has a ridiculously high retry overhead.

      NFS should NEVER EVER EVER EVER be run over TCP... if you ever think that running NFS over TCP is a good idea, stop everything you're doing and read the RFC, which explains that TCP support is only there for interoperability. Unless you're using some REALLY REALLY bad software like VMware, which seems wholly intent on having poor NFS support (no pNFS support for how long after pNFS came out?), you should run NFS as UDP only.

      There are many reasons for this... the most obvious reason is that TCP is a truly horrible protocol. It's a quick and dirty solution for programmers who don't want to learn how protocols work or understand anything about state machines. UDP is for people who have real work to do. QUIC is even better, but that's a little while off.

      I would recommend against using WireGuard.

      - It's doing in kernel what should be done in user space

      - It's two-letter variable-name hell

      - It directly modifies sk_buff instead of using helper functions, which increases the risk that kernel updates will introduce security holes over time

      - Key exchange is extremely limited

      I won't say I see any real security holes in it, and I will admit it's some of the most cleanly written kernel module code I've seen in a long time. But there's a LOT of complexity in there and it's running in absolutely privileged kernel mode. It looks like a great place to attack a server. One minor unnoticed change to the kernel tree, specifically to sk_buff, and this thing is a welcome mat for hackers.

      1. chasil

        Re: stunnel, wireguard

        There are also situations where NFS should NEVER EVER EVER be run over UDP. I guess you can save stunnel for those scenarios.

        Isn't there also a userspace implementation of WireGuard? Perhaps you would be happier with that version.

        From "man 5 nfs:"

        Using NFS over UDP on high-speed links

        Using NFS over UDP on high-speed links such as Gigabit can cause silent data corruption.

        The problem can be triggered at high loads, and is caused by problems in IP fragment reassembly. NFS read and writes typically transmit UDP packets of 4 Kilobytes or more, which have to be broken up into several fragments in order to be sent over the Ethernet link, which limits packets to 1500 bytes by default. This process happens at the IP network layer and is called fragmentation.

        In order to identify fragments that belong together, IP assigns a 16bit IP ID value to each packet; fragments generated from the same UDP packet will have the same IP ID. The receiving system will collect these fragments and combine them to form the original UDP packet. This process is called reassembly. The default timeout for packet reassembly is 30 seconds; if the network stack does not receive all fragments of a given packet within this interval, it assumes the missing fragment(s) got lost and discards those it already received.

        The problem this creates over high-speed links is that it is possible to send more than 65536 packets within 30 seconds. In fact, with heavy NFS traffic one can observe that the IP IDs repeat after about 5 seconds.

        This has serious effects on reassembly: if one fragment gets lost, another fragment from a different packet but with the same IP ID will arrive within the 30 second timeout, and the network stack will combine these fragments to form a new packet. Most of the time, network layers above IP will detect this mismatched reassembly - in the case of UDP, the UDP checksum, which is a 16 bit checksum over the entire packet payload, will usually not match, and UDP will discard the bad packet.

        However, the UDP checksum is 16 bit only, so there is a chance of 1 in 65536 that it will match even if the packet payload is completely random (which very often isn't the case). If that is the case, silent data corruption will occur.

        This potential should be taken seriously, at least on Gigabit Ethernet. Network speeds of 100Mbit/s should be considered less problematic, because with most traffic patterns IP ID wrap around will take much longer than 30 seconds.

        It is therefore strongly recommended to use NFS over TCP where possible, since TCP does not perform fragmentation.

        Jumbo frames are the top-rated workaround.
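
        The arithmetic behind that warning is easy to sanity-check. A throwaway Python sketch (the 8 KB datagram size is my own illustrative figure; the rest matches the man page's numbers):

          # Seconds until the 16-bit IP ID space wraps, assuming one ID per
          # UDP datagram; a wrap inside the 30 s reassembly timeout means
          # fragments from different datagrams can be mis-joined.
          IP_ID_SPACE = 2 ** 16          # 65536 distinct IP IDs
          REASSEMBLY_TIMEOUT_S = 30      # default fragment reassembly timeout

          def wrap_time_s(link_bits_per_s, datagram_bytes):
              datagrams_per_s = link_bits_per_s / (datagram_bytes * 8)
              return IP_ID_SPACE / datagrams_per_s

          for bps, label in [(100e6, "100 Mbit/s"), (1e9, "1 Gbit/s")]:
              t = wrap_time_s(bps, 8192)          # 8 KB NFS read/write datagrams
              verdict = "unsafe" if t < REASSEMBLY_TIMEOUT_S else "ok"
              print(f"{label}: IP IDs wrap in ~{t:.1f}s ({verdict})")
          # 100 Mbit/s: IP IDs wrap in ~42.9s (ok)
          # 1 Gbit/s: IP IDs wrap in ~4.3s (unsafe)

        The ~4.3 s figure at gigabit lines up with the man page's observed "about 5 seconds", and falls well inside the 30-second reassembly window.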

        p.s. Olaf Kirch's overview of NFS on Linux says that TCP was always the default.

  4. Michael Wojcik Silver badge

    opportunistic TLS

    The usual problem with opportunistic TLS (client offers to use TLS, e.g. with a STARTTLS message as done in this I-D, and sees if the server will accept it) is that a man-in-the-middle can just reject the offer and force the client to downgrade.

    A quick glance over the I-D suggests that in this case the MITM would either have to intercept the conversation and forge the rejection, or replace the client's authentication message with AUTH_NONE, which should fail safe (i.e. the MITM could just start its own AUTH_NONE session if it wanted). So the downgrade would only work if the MITM sent a rejection, the client downgraded, and the server was configured to accept a non-TLS connection.

    Still, that's far from ideal. If the server supports non-TLS connections for compatibility, a MITM can force any compliant client not to use TLS.

    It'd be good if the protocol had a downgrade-detection mechanism like TLS_FALLBACK_SCSV, but I don't think there's any integrity-protection mechanism available when TLS (or another encryption mechanism like RPCSEC_GSS) isn't used which can prevent the MITM from removing the downgrade signal.

    But all that said, opportunistic TLS prevents a passive attacker from snooping, so it has some value. And it can help with a phased migration to an always-secured configuration.
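
    To make the downgrade window concrete, here's a toy sketch of the opportunistic-upgrade pattern (not the I-D's actual wire format; the STARTTLS token and the "OK" reply are placeholders of mine):

      import socket
      import ssl

      def connect_opportunistic(host, port):
          """Offer TLS; fall back to cleartext if the peer declines."""
          sock = socket.create_connection((host, port))
          sock.sendall(b"STARTTLS\r\n")        # offer the upgrade
          if sock.recv(64).startswith(b"OK"):
              ctx = ssl.create_default_context()
              return ctx.wrap_socket(sock, server_hostname=host)
          # The weak spot: nothing authenticates this rejection, so a MITM
          # who forges it lands every compliant client on this branch.
          return sock                          # cleartext fallback

    Refusing to take the fallback branch (or refusing unauthenticated sessions server-side) closes the hole, at the usual cost to compatibility during a phased migration.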
