It feels like odd behavior, but logically it makes sense to trust the authenticated info from the SSL certificate.
Personally to resolve all ambiguity I'd just use different certificates. Separation of concerns and all.
jsmith45 698 days ago [-]
But let's say I had two servers. One that hosts 1000 different subdomains, so I give it a wildcard cert, as 1000 SAN entries would be absurd.
The second server hosts one subhuman, and has a specific cert for it.
Would Firefox try to send a request for that separate subhuman to my big wildcard server? That sounds very wrong!
It is also possible that I had configured that wildcard server to serve up some default site for unknown hostnames. Perhaps that default site exists to tell people how to report a problem if they see it, as with 1000 subdomains it would be very easy for there to be an accidental mismatch between the name in DNS and the name in the nginx config, especially if relevant automation has not been built up yet.
I'd see that as an absolute bug, regardless of what the spec says (and the bug report does have some people disputing that the spec allows doing this without checking DNS first).
Although I suppose it is possible that Firefox won't do this for wildcards, but only for explicitly listed SAN names? Which is better but still quite odd seeming to me.
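To sketch what I mean by the default site (made-up Python standing in for the actual nginx config):

    # Hypothetical dispatch on the wildcard server: any hostname without
    # an explicit entry falls through to a default site that explains
    # how to report the problem.
    def dispatch(host, sites, default_site):
        # `sites` maps each configured subdomain to its handler; a
        # DNS/nginx mismatch means `host` is missing from it.
        return sites.get(host, default_site)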
xg15 698 days ago [-]
> The second server hosts one subhuman, and has a specific cert for it.
Would Firefox try to send a request for that separate subhuman to my big wildcard server? That sounds very wrong!
You might want to clear your autocorrect cache.
marcosdumay 698 days ago [-]
It's a problem if you decide to trust a certificate that isn't automatically trusted by the browser.
pornel 698 days ago [-]
The reporter has discovered that HTTP/2 has a connection coalescing feature. Coalescing requires hosts to share the same valid TLS certificate. As far as the specs go, this is not a bug, and it doesn't violate the security model of HTTP/2.
HTTPS is meant to add security on top of an unprotected, untrustworthy network, so it doesn't concern itself with what happens at the IP layer, which is already assumed to be potentially MITMed/spoofed. Here the reporter expects the unprotected network to add security in case HTTPS has been completely defeated. Good luck with that.
untitaker_ 698 days ago [-]
As the reporter already stated, just because the HTTP/2 spec permits it, that doesn't make it a good idea. Now Firefox may "resolve" a hostname to a different IP based on whether it might have connected to a different hostname before. I don't want to imagine what bugs this will cause. And for what? To save another (probably already cached) DNS lookup?
pornel 698 days ago [-]
The time to bikeshed whether it's a good idea was at the IETF when HTTP/2 was being designed. This feature has been in production for over 6 years.
Note that both hostnames must be in the same TLS certificate. You won't get random hosts coalesced by accident. You have to specifically obtain a TLS certificate that contains multiple of your hosts, and their DNS entries have to have at least one IP address in common. And then the server can still return an HTTP error that tells the browser to stop coalescing and retry with a fresh connection.
In practice this feature is commonly used to reuse a single CDN connection to fetch from multiple hosts behind the same CDN (e.g. www.example.com + assets.example.com), and avoids fragmenting per-connection request prioritization in HTTP/2.
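Roughly, as a sketch of those two conditions (made-up helper names, not Firefox's actual code):

    import socket

    def resolve(host):
        # All A/AAAA addresses currently returned for a host.
        return {info[4][0] for info in socket.getaddrinfo(host, 443)}

    def may_coalesce(conn, new_host):
        # Condition 1: the connection's certificate is already valid
        # for the new host (SAN entry or matching wildcard).
        # `conn.cert_matches` is a made-up helper.
        if not conn.cert_matches(new_host):
            return False
        # Condition 2: the two hosts' DNS entries share at least one
        # IP address (not necessarily the one the connection uses).
        return bool(resolve(conn.host) & resolve(new_host))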
toast0 698 days ago [-]
> You have to specifically obtain a TLS certificate that contains multiple of your hosts, and their DNS entries have to have at least one IP address in common.
But oddly, IMHO, the IP actually used to send the requests need not be one of the addresses they have in common?
pornel 698 days ago [-]
Yes, because IPs are hard to keep exactly in sync even when they reach the exact same machine, e.g. because of sharding/load-balancing done by CDNs.
untitaker_ 695 days ago [-]
>The time to bikeshed whether it's a good idea was at IETF when HTTP/2 has been designed.
That's not at all how this works in practice. Plenty of bad ideas leave the IETF all the time, and it's up to future standard revisions to follow up with how those are being dealt with in implementations.
zerocrates 698 days ago [-]
The utility of reuse is clear, but how often is it actually necessary to "reuse" a connection to an address that's not in the DNS for the hostname you're trying to connect to, as here? Seems like it would be very rare.
londons_explore 698 days ago [-]
Isn't it fairly common for hundreds of IPs to be in a DNS zone, but only a random subset is returned to the client as a form of basic load balancing?
zerocrates 698 days ago [-]
It seems like it must actually do a DNS lookup to have established the matching IPv6 address, so you're not even saving that. (edit: Or maybe it just ignores DNS totally at this point and operates off having seen the second hostname in the certificate it got before? That seems like it would cause more issues though.)
I'm not sure I totally buy that the spec does allow it (since the "authoritativeness" rules for HTTPS are defined as being in addition to those for HTTP), but beyond that it is a little hard to imagine what purpose there is in being as greedy as this when matching up reused connections.
xg15 698 days ago [-]
Doesn't this have the potential to interact really badly with SNI though?
After all, TLS today is not "just" a security layer anymore. With SNI and ALPN it has also taken on quite a bit of routing functionality.
E.g. suppose example.com has two subdomains, foo.example.com and bar.example.com. To simplify deployment, the site uses a central TLS gateway to terminate all connections to *.example.com and also uses a single wildcard certificate for all subdomains. The gateway then reads the SNI header to figure out which backend server it should proxy the connection to.
This works perfectly fine as long as a browser only uses a connection for the subdomain that it announced in the SNI header. However, if it suddenly starts to make requests for a different subdomain on that connection, because both subdomains share the same IP (of the gateway) and cert (wildcard), requests will be routed to the wrong backend server and things will break.
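As a sketch of that gateway logic (made-up names; in reality this would be a few lines of gateway config):

    # The backend is chosen exactly once per connection, from the SNI
    # name sent in the TLS handshake.
    BACKENDS = {
        "foo.example.com": "10.0.0.1",
        "bar.example.com": "10.0.0.2",
    }

    def route(tls_conn):
        # `tls_conn.sni_name` is a made-up attribute for the SNI value.
        backend = BACKENDS[tls_conn.sni_name]
        # A coalesced request for bar.example.com on a connection that
        # announced foo.example.com still lands on 10.0.0.1.
        return backend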
I'm not sure what the exact conditions for connection coalescing are in Firefox. If it's really as you say and the certificates have to be identical, then it's just a safety issue. However, if it's only required that the certs are signed for the same domain, then it could become a security issue as well: all you'd need is to be on the same shared hoster as your victim and obtain a fraudulent cert for their domain, and you could hijack some of their connections.
Edit:
Also, if the browser has a connection open to foo.example.com and wants to make a request to bar.example.com, how does it even obtain the cert for bar.example.com to decide if it can reuse foo's connection? It would have to open a new connection with SNI=bar.example.com, grab the cert - and then close the connection again and go back to foo's connection. What would even be gained by that?
devit 698 days ago [-]
As far as I can tell this is only done if the certificate is valid for the new domain, so this seems only exploitable if you have stolen the certificate public keys.
It might allow traffic interception after stealing the private keys without MitMing the connection, but in general you have to assume the adversary is all-powerful and can MitM anything, and thus that a stolen certificate private key is catastrophic, so it doesn't really change anything in the worst case.
It also appears that it needs an address in common, so it doesn't even seem a problem from a correctness perspective, since if you were fine getting traffic for both domains on the same IPv6 address (and thus the same endpoint), then why would you not support getting it on the same IPv4 address, or vice versa.
Although it's not clear whether it's a good idea, it seems it might be better to require that the domains actually have the address that was connected to in common rather than any address.
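As a sketch of that stricter rule (made-up helpers again):

    import socket

    def resolve(host):
        return {info[4][0] for info in socket.getaddrinfo(host, 443)}

    def may_coalesce_strict(conn, new_host):
        # The address the connection actually uses (`conn.remote_address`,
        # made up here) must itself be in the new host's DNS records,
        # not merely *some* address the two hosts share.
        return conn.cert_matches(new_host) and conn.remote_address in resolve(new_host)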
jsmith45 698 days ago [-]
I could see this perhaps being exploitable against some Cloudflare clone or shared hosting provider where the following setup is used:
One server hosts multiple sites from different end entities, with a shared IPv4 address (since those are scarce), and obviously a cert that contains SANs for all the sites this server instance hosts. On IPv4 the server is configured to look inside the request to know which virtual site to serve up.
But for IPv6, they let each site have a separate IP address, and bind the virtual sites based only on IP address, since each one is unique, and they can avoid the extra overhead of parsing the domain before dispatching this way.
They continue to reuse the multi-SAN cert for these addresses, because having to manage extra certs when you already have one that works for all the sites would be silly.
Now suppose a connection is open to the attacker's site via IPv6, and then Firefox tries to open the victim site: it resolves the IP address, notices an IPv4 match with the attacker's site, and assumes it can just reuse the attacker site's IPv6 connection, since that connection was using a cert that lists the victim domain. Suddenly the attacker's site starts getting requests intended for the victim site, and can do things like steal credentials.
Now perhaps this scenario cannot happen. The bug report does not fully lay out which domains were involved and which IPv4 addresses were assigned to which, and I'm not sure exactly how the reuse logic in Firefox works.
I'm also unsure if this proposed setup violates some HTTP/2 RFC MUSTs, or anything like that, but it is not like it is a completely outlandish idea.
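In sketch form (hypothetical Python standing in for the provider's dispatch logic):

    SITES_BY_V6 = {
        "2001:db8::a": "attacker.example",
        "2001:db8::b": "victim.example",
    }

    def dispatch(conn, request):
        if conn.is_ipv6:
            # Virtual sites are bound purely to the local IPv6 address;
            # the request's :authority is never checked, so a coalesced
            # request for victim.example arriving on the attacker's
            # connection is served by the attacker's site.
            return SITES_BY_V6[conn.local_address]
        # On the shared IPv4 address, dispatch on the request itself.
        return request.authority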
GauntletWizard 698 days ago [-]
> obviously a cert that contains SANs for all the sites
This is where your example falls apart, because that's not a reasonable thing in the real world. Shared hosts should be (and mostly are) using SNI[1], "Server Name Indication". SNI basically adds "Tell me which certificate you want to see" to the protocol, so your browser starts the request with "I'm trying to connect to Site1", and the shared hosting provider will have certificates for each individual site they're hosting, and pick the right one.
For a real example of how this is dangerous, see jsmith45's comment above about wildcards and specific sites.
[1] https://en.wikipedia.org/wiki/Server_Name_Indication
[nitpick] Private keys. The public keys are published right inside the certificate of course.
voidwtf 698 days ago [-]
I’m really not comfortable with how they resolved this as invalid.
This seems like it could be ripe for abuse when the host is behind a Cloudflare-like service, or a CDN with a shared anycast infrastructure. Oftentimes these services will use the host name in the initial connection to determine the origin. While it would be very difficult to turn this into a targeted attack, I could imagine that spraying a number of discrete domains across those services may result in finding one or more interesting hosts.
This could also cause some quite unexpected behavior if your applications/infrastructure sits behind a common reverse proxy where all hosts share a *.host.tld certificate pointing to the same reverse proxy. Imagine static.host.tld serving you the login page, which also tries to make a request to api.host.tld, which shares the same certificate and IP but, given its host name, would have been proxied to a different backend server.
tialaramex 698 days ago [-]
> This could also cause some quite unexpected behavior if your applications/infrastructure sits behind a common reverse proxy where all hosts share a *.host.tld certificate pointing to the same reverse proxy. Imagine static.host.tld serving you the login page, which also tries to make a request to api.host.tld, which shares the same certificate and IP but, given its host name, would have been proxied to a different backend server.
If your static.host.tld cheerfully claims to be api.host.tld and you gave it a certificate testifying to this claim, then it's really your fault when it can't serve API queries, right?
Outfits like Cloudflare handle the mapping properly: even if they have arranged a single certificate for this.example, that.example, and the-other.example, they care whether a particular TLS session says it's for this.example or that.example, and the transactions will accordingly go to the correct place. This Mozilla behaviour doesn't cause any problems.
This is something a crappy bulk host probably gets wrong using Apache httpd, since Apache doesn't make even the bare minimum effort to actually implement this correctly by default (when a TLS client says "Hi, I want to talk to api.host.example" and there is no such host configured, Apache just figures the default host can handle it, even though the standard explicitly says that's wrong...). Fortunately the crappy bulk host is almost certainly just mapping everything through a bunch of VirtualHost rules, and you probably can't exploit it in the way you describe these days, although they should probably really use an HTTP server from somebody competent.
voidwtf 698 days ago [-]
I think you're missing the part of the bug report where Firefox is REUSING the existing TLS connection, which was established with a completely different SNI.
If I have a load balancer handling all these connections, and I routed a connection through to static-backend-1, and then Firefox “cheerfully” decided to reuse this connection for api.host.tld, how is my load balancer, which has already handed off the connection to static-backend-1, going to do anything about that?
tialaramex 698 days ago [-]
Mozilla are doing this for HTTP/2, which transports the entire URI, unlike HTTP/1.0, where people just figured hey, I needn't send the server's name.
So, the request for api.host.tld says "api.host.tld" on it. If your static server receives this request but isn't able to service api.host.tld requests, the HTTP/2 specification provides an error code to return, 421, saying, oops, I can't help you with that - and the specification tells clients that in this case they might try asking via another route.
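In sketch form (made-up helpers, but the 421 Misdirected Request status is real, from RFC 7540 section 9.1.2):

    SERVED_HOSTS = {"static.host.tld"}

    def handle(request):
        if request.authority not in SERVED_HOSTS:
            # 421 tells the client this connection can't answer for that
            # authority; it should retry on a fresh connection.
            return Response(status=421)  # `Response` is a stand-in type
        return serve(request)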
xg15 698 days ago [-]
> they care whether a particular TLS session says it's for this.example or that.example, and the transactions will accordingly go to the correct place. This Mozilla behaviour doesn't cause any problems.
Indeed they do - via the SNI header, which is set once per TLS connection, not once per HTTP request. I would think initializing a connection for one domain and then reusing it for a different domain could absolutely cause problems.
Matthias247 698 days ago [-]
CDNs with shared infrastructure make use of domain-fronting checks. Those validate that the target of every single request (identified by the Host or :authority header) is valid for the SNI and associated TLS certificate that was used to establish the connection, and otherwise reject the request. That is actually independent of HTTP/1.1, /2 or /3, since all of those allow for multiple requests targeting different authorities on the same connection.
If you build your multi-tenant webserver yourself, however, you obviously have to be careful to get this right.
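Roughly (made-up helper names):

    def check_request(conn, request):
        # `conn.certificate` is the certificate presented for the SNI
        # name of this connection (made-up attributes).
        if not conn.certificate.matches(request.authority):
            # Reject requests whose Host/:authority isn't covered by
            # the certificate, regardless of client connection reuse.
            return Response(status=421)
        return forward(conn, request)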
nix0n 698 days ago [-]
It looks to me like HTTP/2 can be disabled in Firefox by setting network.http.http2.enabled and network.http.http2.enabled.deps both to false.
I tested on http://www.http2demo.io/ with those settings both enabled and disabled. The HTTP/2 version doesn't fail when they are disabled, but it shows the same result as the HTTP/1 version.
To me this behavior is an argument against Cloudflare more than it's an argument against HTTP/2, but mostly I hope that nobody is tempted to switch to Chrome about it.