CoralCDN
-
I've just been looking at this as a way of reducing the load from resources such as JS and images on some sites. It looks like a quick and easy way of offloading static content to a distribution network, which has to be a good thing. What worries me is that there doesn't seem to be (or at least, I can't find) much discussion of the security implications of including HTML and client script code from unknown third-party servers. Is there some kind of security model or checksum system to prevent rogue proxy operators from injecting arbitrary code that could harm site users? Eeek...
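For background on how it's used (so the rest of this makes sense): as far as I can tell, you "Coralize" a URL by appending .nyud.net to the hostname, and DNS then routes the request to a nearby Coral proxy. A rough sketch of that rewrite, purely by way of illustration (the helper name is mine, and I've left the proxy port optional since the instructions I've seen vary):

```python
from urllib.parse import urlsplit, urlunsplit

def coralize(url, port=None):
    # Rewrite a URL so it is fetched via CoralCDN by appending
    # .nyud.net to the hostname; older instructions also appended an
    # explicit proxy port, so that's left as an option here.
    parts = urlsplit(url)
    host = parts.hostname + ".nyud.net" + (f":{port}" if port else "")
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))

# e.g. static assets referenced from a page template:
print(coralize("http://example.com/static/app.js"))
# -> http://example.com.nyud.net/static/app.js
```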
-
Yes, if you want to pay for it :) One of the benefits of Coral is that it appears to be free. It's also apparently distributed: the organisers don't necessarily have control over the behaviour of individual proxies, and a user (end user or web host) can't specify which one to use.

Obviously free is no good if it means abandoning pretty much all security and control over your website and handing it to unknown third parties, which is why I'm trying to figure out whether this has been considered and mitigated (via some kind of central control, checksums, etc.).

As I understand it, an end user gets content from whichever CDN nodes are closest to them where possible, which would make it even harder to track down a rogue node that had done something bad to some of your users: you wouldn't know which node it was, and as a site operator you probably wouldn't see any evidence that it had happened. If a node started scanning content for keywords and serving up competitors' or inappropriate content, that would be bad. If a node started injecting malicious client script to attack users, that would be terrible, and hard to detect.

Presumably this is either somehow mitigated or an accepted risk of such an open and free system. What worries me is that I haven't seen much discussion about it: are people unknowingly at risk, or have I got the wrong end of the stick?
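To make the checksum idea concrete, this is roughly the kind of spot check I'd want to be able to run as a site operator: fetch the same asset directly from the origin and through Coral, hash both, and flag any mismatch. This is purely a hypothetical sketch on my part (nothing like it is a Coral feature as far as I know), and of course it only tests whichever proxy my checking machine happens to hit, which is exactly the problem:

```python
import hashlib
import urllib.request
from urllib.parse import urlsplit, urlunsplit

def coral_url(url):
    # Same rewrite as above: fetch the resource via CoralCDN by
    # appending .nyud.net to the hostname.
    p = urlsplit(url)
    return urlunsplit((p.scheme, f"{p.hostname}.nyud.net", p.path, p.query, p.fragment))

def sha256_of(url):
    # Fetch a URL and return the SHA-256 hex digest of the response body.
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_resource(origin):
    # Compare the bytes served by the origin with the bytes served
    # through whichever Coral proxy this machine is routed to.
    origin_hash = sha256_of(origin)
    coral_hash = sha256_of(coral_url(origin))
    if origin_hash != coral_hash:
        print(f"MISMATCH for {origin}: origin={origin_hash} coral={coral_hash}")
        return False
    return True

check_resource("http://example.com/static/app.js")
```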
-
Ah, if you're talking free... This is what the Coral Wiki has to say about it:
Ultimately, we certainly want and hope that many third parties run Coral nodes, so that Coral can grow into a world-wide network of thousands of computers. For now, although the source is available via anonymous CVS, we'd prefer to run a network of several hundred machines on PlanetLab that are under our control, to enable easier maintenance, debugging, and pushing our regular changes, bug-fixes, and new functionality. However, feel free to use Coral regularly! In fact, we welcome your help and feedback as users. Furthermore, there are more serious security issues we will have to handle once Coral is run on untrusted clients (one can think of the current deployment as "trusted", similar to commercial CDNs.) Until better security protections are in place, we want to retain control over Coral nodes.