This is a cool side-channel attack that makes use of two nuances in the web.
First, the Cache Storage mechanism available to service workers lets a site's JavaScript cache a third-party response it has loaded through the fetch API. Cache implementations aren't designed with timing attacks in mind and are performance-sensitive, so it's reasonable to expect larger resources to take longer to store, and the page can observe that time.
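A minimal sketch of that measurement, assuming a page with a registered service worker so the Cache API is available (the function name, cache name, and median-aggregation helper are illustrative, not from the talk):

```javascript
// Browser-only sketch: measures how long cache.put() takes for a
// cross-origin response fetched with mode "no-cors". The write time
// correlates with the size of the (opaque) response body.
async function timeCachePut(url) {
  const cache = await caches.open('timing-probe');          // programmable Cache API
  const response = await fetch(url, { mode: 'no-cors' });   // opaque cross-origin response
  const start = performance.now();
  await cache.put(url, response);                           // larger bodies take longer to store
  const elapsed = performance.now() - start;
  await caches.delete('timing-probe');                      // reset between samples
  return elapsed;
}

// Pure helper: timings are noisy, so aggregate repeated samples
// with a median rather than trusting a single measurement.
function median(samples) {
  const s = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}
```

In practice you'd call `timeCachePut` many times and compare the median against baselines, since a single sample is dominated by noise.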
Second, you can generate facebook posts which are targeted to specific demographics - like age ranges or specific ages. This will generate URLs which will have a different page length when loaded by logged in users in the target demographic compared to others. It looks like this is possible because there is not an explicit 'access-control-allow-origin' header set on facebook, and while the 'x-frame-options:deny' prevents loading of the content, it can still be cached by a 3rd party.
The attack on Facebook (or any other website, for that matter) works regardless of any Access-Control-Allow-Origin headers. The Fetch API has a mode, "no-cors", which does not require CORS. Also: the cache being used is a programmable cache, distinct from the regular HTTP cache in that any website can place any resource in it, regardless of the headers sent along with the response.
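Putting the two pieces together, the attacker only needs a decision rule over the timing samples. A hypothetical sketch of that logic in plain JS (the nearest-mean rule and all names here are made up for illustration, not taken from the research):

```javascript
// Pure decision logic: given timing samples for two known reference
// posts (one targeted at the victim's suspected demographic, one not)
// and samples for the probe post, guess which reference the probe's
// cache-write time is closer to.
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function classify(probeSamples, inDemoSamples, outDemoSamples) {
  const probe = mean(probeSamples);
  const dIn = Math.abs(probe - mean(inDemoSamples));
  const dOut = Math.abs(probe - mean(outDemoSamples));
  return dIn <= dOut ? 'in-demographic' : 'out-of-demographic';
}
```

A real attack would want more robust statistics than a nearest-mean rule, but this is the shape of the inference.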
(I'm one of the researchers mentioned in the presentation.)
Then you have to diligently add every adult friend to that list forever, and there's a window between sending someone a friend request and actually being friends, when you're not yet allowed to put them on a restricted list.
Oh, I think I misread that, but it's slightly ambiguous - are we blocking specific people from seeing posts (Aunt Susan is uptight and shouldn't see mine) or restricting post visibility to specific people? I guess Dylan probably meant the latter...
I disagree that the failure of Google+ means that nobody wants privacy settings, or in general that the failure of a business means that every single minor innovation they made was flawed. Google+ failed because of the network effect - a social network is only useful if it's also used by people you want to socialize with.
People consistently list privacy controls as among their biggest problems with Facebook, and circles solved that problem neatly once people figured out how they worked. You can of course replicate a circle with a user list on Facebook, but it's nowhere near as obvious.
I guess I'm missing the practical vector associated with the "compare" timing attack. All of the js source code is available to see in a debugger, as well as all of the stored memory values. If you've put sensitive information (a secret) in the browser, you've already failed...
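For context on the "compare" leak being discussed: a naive string comparison returns early at the first mismatching character, so its running time reveals how long the matching prefix is. A minimal sketch of the leaky version next to the usual constant-time fix:

```javascript
// Naive compare: bails out at the first mismatch, so the running time
// leaks the length of the matching prefix of the secret.
function naiveEqual(a, b) {
  if (a.length !== b.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false; // early exit is the timing leak
  }
  return true;
}

// Constant-time compare: always scans the whole string and ORs the
// character differences into one accumulator, so (for equal-length
// inputs) timing doesn't depend on where the strings diverge.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}
```

Whether that matters client-side is exactly the question above - if the secret is already in the page's JS, a debugger gets it without any timing tricks.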