frenchy at gmail.com
Mon Mar 20 17:15:34 PST 2006
Tell me whether I'm understanding the issue you're mentioning correctly:
For example, I have a service at:
If I were evil and knew the various services living on that intranet, I
could craft a document on my web site hoping to lure employees,
calling this service through a CAXHR or JSONRequest, even if I wasn't
able to read the payload of the response, because the
X-Allow-Foreign-Hosts (or X-Allow-Foreign-Documents-To-Read-This-Data,
heh) header was never sent.
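To make the proposed opt-in concrete, here is a sketch (not any real browser API) of the check a user agent might run before exposing a cross-site response to the calling document. The header name comes from this thread; the function name, the comma-separated value format, and the "*" wildcard are my assumptions.

```javascript
// Hypothetical sketch: decide whether a cross-site caller may READ a
// response, based on the opt-in header proposed in this thread.
function mayExposeResponse(requestingHost, responseHeaders) {
  const allowed = responseHeaders["x-allow-foreign-hosts"];
  if (!allowed) return false; // no opt-in: the payload stays hidden
  return allowed
    .split(",")
    .map((h) => h.trim())
    .some((h) => h === "*" || h === requestingHost);
}
```

Note that (as discussed below) this only protects the *read*; the request itself has already reached the server by the time any response header can be inspected.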
This would cause all server logs to rotate! yargh!
But today, I'm already able to trigger such a security hole in many
ways, using existing HTML and DOM constructs:
- I can create a hidden form whose target is a hidden iframe
- as soon as the form and iframe are rendered, I can have a script
submit the form, firing the request at the service
Now, thanks to the cross-frame scripting policy, I won't be able to READ
the results of the request, which live in the hidden iframe. But the
request has already been made. Harm has already been done to a
service that blindly acted on it.
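A minimal sketch of that form/iframe vector (the intranet URL and field names here are made up for illustration):

```html
<iframe name="hidden-sink" style="display:none"></iframe>
<form id="csrf-form" method="POST" style="display:none"
      action="http://intranet.example/some-service" target="hidden-sink">
  <input type="hidden" name="confirm" value="yes">
</form>
<script>
  // Fires the cross-site POST as soon as the elements exist. The response
  // lands in the iframe, where the same-origin policy keeps it unreadable,
  // but the server has already processed the request.
  document.getElementById("csrf-form").submit();
</script>
```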
I could also likely trigger a similar HTTP request by simply pointing
an image to that URL. I could have actually just made that service URL
the source of my hidden iframe. I could have made that service URL the
src attribute of a script tag. I could have opened that URL in a
pop-up window, before closing it on a timer ...
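The same alternate vectors, sketched as markup (again with a made-up intranet URL); each one fires a GET at the service, and none of them lets my page read the reply:

```html
<img src="http://intranet.example/some-service" style="display:none" alt="">
<script src="http://intranet.example/some-service"></script>
<iframe src="http://intranet.example/some-service" style="display:none"></iframe>
<script>
  // Open the service URL in a pop-up, then close it on a timer.
  var w = window.open("http://intranet.example/some-service");
  setTimeout(function () { w.close(); }, 1000);
</script>
```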
But the end result remains: I can't read or access the data that the
service returned. Even though I have caused evil: all log files got
rotated!
Because CAXHR and JSONRequest are pretty adamant about not sending any
cookies or cached Basic Authentication credentials, attackers would
likely resort to those methods last, preferring an iframe, an image, a
script, or a form as first vectors of attack.
On 3/20/06, Gervase Markham <gerv at mozilla.org> wrote:
> Chris Holland wrote:
> > That's where the extra HTTP header would come in:
> > "X-Allow-Foreign-Hosts": Forcing developers who expose such a service,
> > to make the conscious choice to expose data to the world, what Jim
> > refers to as "OPT-IN".
> I believe the usual objection to this (which was raised when I suggested
> something similar) is that some services respond to requests by doing
> something - therefore, a model which allows cross-site requests has to
> check that the request is permitted before making it, not before
> processing the result.
> I believe the Mozilla Foundation has done some work in this area using a
> top-level site-wide XML document to specify what services can be
> accessed cross-domain; but I don't know the details. Perhaps someone
> else can chime in with them.
I believe this might be it:
As suggested by Doron Rosenberg in this message: