by gsm » Sun Mar 07, 2010 7:40 am
Giorgio Maone wrote:
Some sites which have complex cross-site relationships requiring authentication will likely break. That's why ABE allows very fine grained tuning, but you need to know what you're doing.
Do you mean something like OpenID, or just poorly crafted multi-domain applications? I see only abuse in these cases, and I feel such sites should be banned unless they strictly comply with CORS. Do we have a new "web for IE" problem here?
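For concreteness, here is how I picture the fine-grained tuning you describe (a minimal sketch: the hostnames are hypothetical and the rule shape is my reading of the ABE syntax quoted further down): accept authenticated requests to an SSO endpoint only from the sites that legitimately use it, and strip credentials from every other origin instead of blocking it outright.
[code]
# Hypothetical ABE ruleset for a cross-site login flow:
# sso.example.com is an authentication endpoint used by
# pages on example.com and its partner example.net.
Site sso.example.com
# Accept requests from the legitimate consumers...
Accept from .example.com .example.net
# ...and strip cookies/credentials from any other origin
# rather than blocking it outright.
Anonymize
[/code]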
gsm wrote:
Q4: What is the status of ABE and surrogates? Can we expect an adequate GUI?
Some GUI is planned for both, but currently the priorities are different (fine-grained permissions, enterprise deployment, and a huge surprise to be revealed during the next few weeks).
Thank you, I can't wait to see some headlines ;-)
gsm wrote:
Q5: How does G. Maone's proposed specification accord with W3C CORS? Will developers have to implement both? What's wrong with the current specification?
They're not mutually exclusive. Actually, ABE can leverage CORS (partially, since ABE is more fine-grained in some respects) to be implemented as a proxy.
Is anyone at W3C involved? I see ABE's pushable rulesets as a bit of a quick-and-dirty workaround until CORS is fully implemented. Web developers already have enough hassle implementing SEO, robots.txt, and now CORS, and here's ABE on top. Then there are the dirty JSONP hacks, Yahoo proxies, etc. Doesn't a new duplicate spec lead to new "separate webs"?
gsm wrote:
Qx: Is it possible to block all foreign requests (such as yahoo.com getting images from yimg) using NoScript? If so, which content-types are allowed? It's cross-domain, with parseable data.
No content type is allowed:
[code]
# Match every site, accept same-origin requests
# only, and deny everything else.
Site *
Accept from SELF
Deny
[/code]
(but that looks far too drastic to me)
Thank you. But that does not solve the issue.
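To be fair, I can see how ABE scopes this per origin; a hedged sketch for the yahoo/yimg example above (hosts as in my question, rule shape assumed from the syntax you quoted):
[code]
# Hypothetical: let only Yahoo's own pages pull
# content from the yimg CDN; deny every other origin.
Site .yimg.com
Accept from .yahoo.com
Deny
[/code]
But such a rule still cannot tell an innocent image apart from parseable data, which is exactly my point about content types.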
Yes, blocking everything is more useful for Thunderbird, etc., than for a general-purpose browser. Still, we had a huge problem a few years ago with webmail and embedded web bugs; currently, Google and Yahoo do a good job of filtering these out. Nowadays it is generally impossible to parse foreign IFRAMEs, which is good. NoScript blocks JavaScript and, as a side effect, JSONP (I feel that executing third-party scripts, even from Yahoo or Google, is plain abuse of the user's trust). AFAIK, Firefox blocks foreign XSLT and the active elements of SVG and web fonts...
Then, what if a new vulnerability is found? We have had such cases with JPEG, Adobe Reader... For example, I'm developing something like a CSS HTTP Request, a non-scriptable, browser-parseable replacement for JSONP. Suppose it becomes popular... Someone might develop different hacks, and then suddenly a parser bug is found. Should the whole Web of mashups pause until browsers update? We live in a world of exceptions, and content types are an example. I'm not crying for a new feature; I'm just asking: how would a security expert act, and what would he recommend? What would he recommend to a developer?
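The only stopgap I can picture is the pushable-ruleset idea mentioned above (a sketch only: the host is purely illustrative and the rule shape is assumed): fence off the implicated source until browsers are patched.
[code]
# Hypothetical emergency rule, pushed while a parser bug
# is unfixed: the affected CDN may only serve content to
# its own pages; every cross-site request is denied.
Site .vulnerable-cdn.example
Accept from SELF
Deny
[/code]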
[quote="Giorgio Maone"]
Some sites which have complex cross-site relationships requiring authentication will likely break. That's why ABE allows very fine grained tuning, but you need to know what you're doing.[/quote]
Do you mean something like OpenID or just poorly-crafted multi-domain applications? I see just abuse in these cases, and I feel that such should be banned without strict CORS compliance. Do we have a new "web for IE" problem here?
[quote][quote="gsm"]
Q4: What is the status of ABE and surrogates? Can we expect an adequate GUI?
[/quote]
Some GUI is planned for both, but currently priorities are differen (fine grained permissions, enterprise deployment, and a huge surprise to be revealed during the next few week).[/quote]
Thank you, I can't wait to see some headlines ;-)
[quote][quote="gsm"]
Q5: How does G.Maone's proposed specification accord with W3C CORS? Will developers have to implement both? What' swrong with the current specification?
[/quote]
They're not mutually exclusive. Actually ABE can leverage on CORS (partially, since ABE is more fine grained under some aspects) to be implemented as a proxy.[/quote]
Is someone at W3C involved? I see ABE's pushable rulesets a bit like quick and dirty workaround until CORS is fully implemented. Web developers already have much hassle to implement SEO, robots.txt, now CORS and here's ABE. Then there are dirty JSONP hacks and Yahoo proxies, etc.. Doesn't a new duplicate spec. lead to new "separate webs"?
[quote][quote="gsm"]
Qx: Is it possible to block all foreign requests (such as yahoo.com getting images from yimg) using NoScript? If so, which content-types are allowed? It's cross-domain, with parseable data.
[/quote]
No content type allowed:
[code]
Site *
Allow from SELF
Deny
[/code]
(but looks definitely too much drastic to me)[/quote]
Thank you. But that does not solve the issue.
Yes, blocking everything is useful more for Thunderbird, etc., not for a general browser. Yet, we had a huge problem a few years ago with webmail and embedded web bugs. Currently, Google and Yahoo do a good job filtering these. Now it is generally impossible to parse foreign IFRAMEs, which is good. NoScript blocks Javascript, and its side effect - JSONP (I feel executing 3-rd party scripts, even from Yahoo or Google is just plain abuse of user's trust). AFAIK, Firefoxes block foreign XSLT and active elements of SVG and webfonts...
Then, what if a new vulnerability is found? We had such cases with JPEG, Adobe Reader... For example, I'm developing something like CSS HTTP Request as a non-scriptable and browser-parseable replacement for JSONP. Consider that this becomes popular... Someone might develop different hacks. And there suddenly a parser bug is found. Should all the Web of Mashups pause until browsers update? We live in a world of exceptions - and content-types are an example. I'm not crying for a new feature, I'm just asking - how would a security expert act and what would he recommend? What would he recommend for a developer?