Hacker News
My Heart Bleeds for OpenSSL (coderinaworldofcode.blogspot.com)
163 points by Tinned_Tuna on April 8, 2014 | hide | past | favorite | 79 comments



Depending on how deep we go, we could end up throwing everything away, the CA infrastructure, the cipher suite flexibility, the vast quantities of knowledge that surround the OpenSSL ecosystem and allow thousands of developers to work in a more secure manner than would otherwise be possible.

I want to cover the cipher suite flexibility quickly, just to point out how much of a fantastic idea it is. The server and client tell each other what their most-preferred ciphers are, then they agree on the strongest one that they both support.

This has massive benefits. For instance, when BEAST et al. came along and effectively broke the block-cipher-based suites for a while, we could jump to RC4 for a little while. When that was broken, we all hauled ass back to AES. This gives incredible flexibility when dealing with many attacks, by simply being able to circumvent the affected parts.
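A minimal sketch of the negotiation being described (suite names and the selection rule are illustrative; real TLS encodes suites numerically, and in practice the server applies its own preference order to the client's offered list):

```python
def negotiate(client_offers, server_prefs):
    """Pick the server's most-preferred suite that the client also offers."""
    for suite in server_prefs:
        if suite in client_offers:
            return suite
    raise ValueError("no shared ciphersuite: handshake fails")

client = ["RC4-SHA", "AES128-CBC-SHA", "AES256-CBC-SHA"]
server = ["AES256-CBC-SHA", "AES128-CBC-SHA", "RC4-SHA"]

# Normally both sides land on the strongest shared suite.
chosen = negotiate(client, server)

# Reacting to an attack on CBC suites is then just a preference change:
server_no_cbc = [s for s in server if "CBC" not in s]
fallback = negotiate(client, server_no_cbc)
```

The "jump to RC4" described in the next paragraph is exactly the second call: drop the affected suites from one side's preference list and the negotiation routes around them.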

This is frustrating.

All three of the major ciphersuite problems in SSL/TLS (BEAST, RC4, Lucky13) were well-known in the literature before the attacks were demonstrated in practice.

The BEAST vulnerability --- chaining IVs across messages instead of making them explicit and thus unpredictable --- was described years before BEAST was published, and was presaged by Phil Rogaway in discussions about IPSEC.
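The IV issue can be illustrated with a toy CBC encryptor (the "block cipher" here is a hash-based stand-in, purely for illustration; only the IV-handling difference is the point):

```python
import hashlib
import os

BLOCK = 16

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher; not invertible, not secure.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0
    blocks, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        # CBC: XOR each plaintext block with the previous ciphertext block.
        mixed = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_cipher(key, mixed)
        blocks.append(prev)
    return b"".join(blocks)

key = b"k" * 16
record1 = cbc_encrypt(key, os.urandom(BLOCK), b"A" * 32)

# SSLv3/TLS 1.0 style: the next record's IV is the previous record's last
# ciphertext block -- already visible on the wire, hence predictable by an
# attacker. That predictability is the BEAST precondition.
chained_iv = record1[-BLOCK:]

# TLS 1.1+ style: a fresh random IV is generated and sent explicitly with
# each record, so nothing about it is known in advance.
explicit_iv = os.urandom(BLOCK)
record2 = cbc_encrypt(key, explicit_iv, b"B" * 32)
```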

It is possible that RC4 was known to be vulnerable all the way from the moment it was leaked to Usenet. Certainly, the statistical biases were well-known. Not just apocryphally, but in the academic literature.

Lucky13 was a product of MAC-then-encrypt constructions, and an instance of a specific attack Vaudenay described in the early 2000s.

TLS has "ciphersuite flexibility" that has enabled the protocol to hop from one vulnerability to another. Meanwhile, fundamental fixes (either explicit encrypt-then-MAC, or AEAD modes like GCM, or secure native stream ciphers) have demanded fundamental protocol changes, not just new entries in a list of ciphersuites.


I never realized RC4 was leaked onto Usenet and a mailing list anonymously: http://web.archive.org/web/20080404222417/http://cypherpunks...

It's interesting to read the very first reactions and evaluations: http://web.archive.org/web/20070423042054/http://cypherpunks...


Why do you use an archive.org link to my server, btw? (I don't care, just curious; I've been meaning to reprocess the archives and make them directly searchable once I find better tools)


Oh, nice.

I even almost managed to find a copy of my old "mystery" mail (mystery because I've been unable to find it in the archives, even though I know it has to be in there, as evidenced by the old print-out I have):

http://veps.hypertekst.net/misc/anon-remail/

Did find the thread, though:

http://cypherpunks.venona.com/date/1996/03/msg00168.html

Any chance I could get a copy of the full archive? (Email in profile in case you don't want to make the raw archives available over http).


It's possible it was a message which didn't go to every node.


I just copied the links from wikipedia :)


It sounds like you are describing an "implementation problem" (i.e., OpenSSL's code sucks).

But then you suggest this could be a reason to throw out the notion of "ciphersuite flexibility".

Aren't these two separate things?

Perhaps the flexibility is good.

Maybe the problem is one of complexity and quality control.

Too many ciphers, and incorporating ones of dubious quality.

I still haven't seen anyone mention the other SSL libraries, e.g., axssl, polarssl, matrixssl, etc.

As for CA "infrastructure", what if the user uses OpenSSL's ca function?

She creates her own CA certificate and key and installs it on her device.

Then she downloads a website's certificate, signs it and installs it on her device.

Regardless of whether a website has a paid-for certificate from a commercial CA, she needs to make the final decision whether or not to trust it.

The user is the ultimate arbiter of which website certificates she wants to sign and install. (Not browser authors.)

Websites just need a central repository to publish their certificates.

They already do this for their "domain names" by having them published in a publicly accessible zone file (ideally, the user can download the zone file, as well as query it piecemeal over a network).

We as users trust that these zone files are accurate: specifically, we assume the IP addresses for the website's nameservers are correct.


Prior to TLS 1.2, all of the mainstream ciphersuites are bad.

The fact that you have to upgrade to TLS 1.2, which includes more than just new ciphersuites, somewhat undercuts the idea that the ciphersuite mechanism provided much protection.

Ultimately, the protocol might have been just as well off by defining a single ciphersuite and accepting that a break in that ciphersuite would necessitate a protocol update.


This whole thing was totally mismanaged.

Cloudflare had 1 week's notice from these Codenomicon guys, and they patched it and stayed silent (NDA?).

Then Codenomicon came up with a fancy name, bought a domain that I'm guessing they paid a fair sum for (for publicity?), and publicized it without notifying anyone (Amazon? Yahoo? Any Linux distro?)

I mean, openssl.org's own website was vulnerable until not too long ago.

I don't appreciate having to build custom RPMs in the middle of the night because my distribution had no prior warning.

I really don't appreciate this spectacle to advertise a security services firm.


Everybody wants prior warning, but very early in the process you hit a point where secret notices create more harm than help.

The worst situation you can be in is where everyone knows something is going on, but only a small group of people know what it is. At that point, the incentives strongly favor the attackers, who will work quickly to join the in-crowd and exploit the window of time where they know how to exploit the flaw, and site operators don't know enough about it to take it seriously.

It's especially hard to stage-manage these updates with open source software. Look at the patch; to someone who takes exploit code seriously, the vulnerability is instantly obvious. How was anyone supposed to give you a heads up without tipping off attackers?


I think we can agree that there is a happy medium between the slow and risky process of notifying every single organization, and only one SSL terminator "in the know".

The logistics of disclosure for something as groundbreaking as this are probably very difficult.

I suppose the details are still sketchy at this point, but I have no idea how CloudFlare received a week's notice, and no one more important[1] knew of it until it was publicized. CloudFlare being the special group makes me suspicious of how it all went down.

[1] I'm not picking on CloudFlare, they have a huge footprint on the internet.


Apparently Akamai was also given early notice from the OpenSSL team.

https://blogs.akamai.com/2014/04/heartbleed-faq-akamai-syste...

I wonder why the various Linux distros like Debian could not have been notified simultaneously.


Has distro notification caused problems in the past? E.g. patches posted early, emails forwarded, public discussion, etc.?


No advance channel set up?

This stuff is always tricky, and I admittedly have a strong adverse reaction to the "just blow everything open, it will be lulzy when the web breaks" attitude.

To be on such a list you also need to have a positive prior record demonstrating your ability to keep your mouth shut, which is hard to prove.


What about the Postgres bug from I think last year where instead of putting up a fancy looking site the people involved coordinated in secret with every major distro to release a patched package? That seemed to me to be much more professional. Or is this bug significantly different?


> At that point, the incentives strongly favor the attackers, who will work quickly to join the in-crowd and exploit the window of time where they know how to exploit the flaw, and site operators don't know enough about it to take it seriously.

Has this sort of behaviour been observed for other vulnerabilities in the wild that can be cited? I would think that the patch is small enough that a co-ordinated deployment to all the distros could have been managed so that they had packages ready before attackers could exploit the vulnerability. Not sure that the correct solution is to announce it with a corporate blog post and a vanity web site instead.


There's another name for the phenomenon described by tptacek above: insider trading. Plenty of examples in the wild. The more insiders know what's happening before the markets do, the more likely someone will act on it, or sell the information.


[deleted]


Someone still being on 0.9.8 and this not being vulnerable isn't too meaningful as an indicator. If they mysteriously went from 1.0.1f back to 0.9.8 without changing any other part of their configuration, that would be suspicious.

The lulzy thing about this bug is that people who are still on 0.9.8 aren't doing the right thing; they just got lucky on this one bug.


> Has this sort of behaviour been observed for other vulnerabilities in the wild that can be cited?

Yes. The mailing list that was used as recently as 2011 to coordinate exactly this kind of issue was backdoored.

Further reading: http://www.openwall.com/lists/oss-security/2011/03/03/3


The only people you can really give advance notice to are those who will keep it secret until the embargo is lifted. That is basically attackers (TAO), service providers (like Cloudflare and Akamai), and very small software development teams who don't release code to public repositories.

I'm sure glad they gave CF advance notice vs TAO (presumably). I'd prefer if it just dropped with zero notice for anyone, although if a security company wants to provide priority notice about vulnerabilities it discovers to people who pay the company in advance, good for them.



I fail to grasp how Cloudflare and Akamai are "service providers" and Amazon Web Services is not.


This is just speculation, but Amazon isn't known for playing very nice in the open-source community. Maybe they lacked the personal relationships required for someone in the know to trust them. OpenSSL, CloudFlare, Google … those are all open-source shops.


"The only way to get that level of assurance would be to build an automated verification suite, or similar."

Wait.....what? OpenSSL doesn't have an automated verification suite? What the?

Well there you go. That's step #1.

I completely disagree with others' calls for a rewrite, and I really don't like their attacks on OpenSSL developers (not that I know them or even know who they are.) How many of you have written software that has been tested as thoroughly in the real world as OpenSSL? Some, but not very many, I presume. OpenSSL has an installed base of billions and billions of machines, and it is mostly successful, and any software that sees that much use (and the corresponding scrutiny) is going to have vulnerabilities revealed. We should absolutely not, not, not rewrite the software and open ourselves up to whole new bugs, or worse, bugs that were in OpenSSL and patched years ago. It would be fine to start a new piece of software, with the hope of competing for SSL, but a widespread campaign to replace OpenSSL would be foolish.

The only possible caveat to this would be if it was developed more like clean-room software a la NASA's probe/shuttle software, with every change agreed upon by a large committee of people, and no change made without weeks of planning. Only a process rigorous enough not to admit new bugs would give the replacement a chance of competing with OpenSSL. And even then, a committee can't think of everything.

Everyone is starting from the reference point of perfect security and yet perfect security is impossible. Literally the best you can do is to have software that has been subjected to attacks over and over and over again and though it may have been flawed it has also been patched.


You don't need a committee to build rock-solid software. There are all sorts of formal verification and modeling tools for writing machine-checkable specifications. The overhead in many cases is high, so you're not going to use them when writing a web 2.0 application, but when you're writing software that is a fundamental part of internet infrastructure, the bar needs to be higher. A thousand eyes making all bugs shallow is not the right approach in such cases.


Well, the 1000 eyes approach has worked with other open source, mission-critical software, like Linux, which also doesn't have an official testing suite. If you have enough people invested in making sure something works, there's a strong incentive to uncover bugs no matter how obscure.

Security's always going to be an arms race in software.


I would love to know the price tags, ballpark informed guesstimates, for the following:

  * Audit of openssl
  * Audit of openssl + remediation work
  * Brand new implementation of ssl library  (100% feature parity not required)


> Brand new implementation of ssl library

Which will take some years to reach the security level of the current OpenSSL implementation (if at all).


Getting an idea of the effort/resources that have gone into openssl is a large part of why I asked the question.


What is all this rewrite talk about? Just use NSS. Done.


Not done. Would it surprise you to learn that "all this rewrite talk"--which was merely one question--was simple curiosity? I did not realize that asking a question would be such a problem for you. I have seen different suggestions of crowdfunding an audit/rewrite and I have no concept of the price tag for such complex projects.


lern_too_spel has a point. Why write an entirely new SSL library when you can switch to an existing one that may already be mostly what you want?

https://xkcd.com/927/

Whether or not NSS is the one you prefer, that's up to you.

http://en.wikipedia.org/wiki/Comparison_of_TLS_implementatio...


What is so difficult to understand about someone being curious about the price tag for large crypto audits/development projects? I am sorry I do not have an xkcd comic to link to, hopefully I can explain this to you without pretty pictures: I am not asking about the price tag because of any specific software development project. I am asking because I am merely curious what the price tag would be. I have seen various discussions mentioning audits and rewrites as if everyone knew what the cost would be. I do not know what the price tag would be, and given the lack of genuine responses it seems a lot of people do not have any idea what the price tag would be.


Ok, I understand why you proposed this. But what's hard to understand is why you are taking a discussion of rational alternatives to your proposal with such... angst? negativity? Not sure what you're feeling.

Consider the converse situation. You are proposing a rational alternative to continuing with OpenSSL as it is. That's a reasonable contribution to a discussion and it is not taken as an attack against OpenSSL.

This is just a philosopher's discussion on a place called Hacker News in the middle of a work day. :)


By "all this rewrite talk," I meant the article, your comment, and several other comments in this thread together. What about my comment indicated that your question was a problem for me?

"Done," answered your question. $0.


How about this: I would love to know the price tags, ballpark informed guesstimates, for the following:

  * Audit of nss
  * Audit of nss + remediation work
For reference I think the truecrypt audit fund was $48k and the LAFS audit of spideroak came in at $10k.


Is OP conflating OpenSSL (a particular codebase/implementation) with SSL/TLS the protocol(s), in a confusing way? Or am I confused?


You're not confused. OP is confused.


NPR is covering this story and confusing the library and protocol as well. I suppose most people that need to understand this will not be getting their info from these people.


If we could throw away CAs, cipher suite flexibility, ASN.1, general baroqueness, C, and the horrible cross-layer pain of SSL, I'd be fine with the hard incompatibility with legacy SSL.


What language other than C would allow code to be used from all mainstream languages/frameworks? This is a critical requirement for a lib like openssl, having N such libs for N languages would be a nightmare.


Something that compiles to C? Or can compile to C at a slight performance penalty for unsupported platforms?


Write it in safe, modern C++, with an extern C API.

I'm sure that leaves out a few rare cases on embedded platforms where they only have a C compiler, but they could continue using existing libraries.


I can't see why C++ would be any safer than C. For example, Heartbleed-like bugs are still very much possible in C++.


It's possible to define a subset of C++ that can be statically verified to not have this sort of problem. This can also be done with C, but it's more difficult. There are actually people who write in overrun-proof subsets of C++.


Wow, I wish posts like yours weren't so rare. There is way too much undeserved hate heaped on C++ due to (imo) C programmers that pretend to code C++.


I generally agree that rewrites are a bad idea. However:

1. Competition is good. Business, open source... doesn't matter; some competition almost always leads to improvements.

2. Sometimes a rewrite can be justified when there's a real difference of design behind it. For instance, using a different language with greater safety (type safety and/or memory safety) might be the only way to get this to the level of security we expect.


Also if you're able to e.g. inject a deterministic random number generator into both libraries, you should be able to cross-check the crypto primitives.

Just create a server that spends all day and night throwing random inputs at the bleeding-edge and released versions of all the libraries and see if they output the same values (i.e., either they are all equally broken or they all work).
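A sketch of that harness, with a deterministic seeded RNG injected so any failure is replayable (the "second implementation" here is just another route to the same stdlib hash, standing in for an independent library):

```python
import hashlib
import random

def cross_check(impl_a, impl_b, seed=0, trials=1000):
    """Feed identical random inputs to two implementations; report divergence."""
    rng = random.Random(seed)  # injected, deterministic: failures are replayable
    for _ in range(trials):
        msg = rng.randbytes(rng.randrange(0, 256))
        if impl_a(msg) != impl_b(msg):
            return msg  # first diverging input, for the bug report
    return None  # agreed on every trial

ref = lambda m: hashlib.sha256(m).hexdigest()
alt = lambda m: hashlib.new("sha256", m).hexdigest()  # stand-in for a second library
```

Because the RNG is seeded, re-running `cross_check` with the same seed replays the exact input sequence, which is what makes a divergence actionable.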


All PRNGs have a seed, so you can always make them deterministic if you want.


Hm. I'll start looking into whether this could be turned into a general crypto/SSL/TLS cross-verification service. I'd need to wrap the various components of libraries into sandboxed REST-like webservices to enable more languages to take part, and then I'd need some system to generate semi-structured junk to throw at those implementations.


The author seems to discuss OpenSSL as if it were the protocol itself, not an implementation of it. OpenSSL != SSL != TLS


This particular breach is very bad.

That said, for a piece of software as widely used as this, what is the actual count of vulnerabilities found?

I've spent quite a bit of time in the openssl code base for various projects over the years. I find it generally readable, but then I've seen some pretty odd ball C code over the years.


I wonder if this author is aware that OpenSSL was once born from a different project, called SSLeay, started way back in 1995. Abbreviated history lesson:

--

People today get pissed off because RSA got paid by the NSA to make crypto weaker, but try living in a time when RSA literally controlled the entire trade of tools used to run the web securely, and could just prevent anyone in the US from using SSL if they didn't pay for it. It's funny that actually all the people who originally worked on getting SSL created or its implementation into free software eventually worked for RSA. Ironically, back in the day the NSA invested millions in a campaign to destroy RSA by offering its own competing cryptosystem as the new de facto standard.

SSL was basically created by Netscape through the work of Taher Elgamal, whose name you might recognize. In the 80s he created a variety of cryptosystems, many of which we still use today in a variety of applications. SSL was created with RSA's cryptosystems and patented algorithms to provide the most security in the fastest way possible for web users. Since all of RSA's algorithms were patented, nobody could implement them (for commercial purposes) without licensing it from RSA. Strong crypto (anything over 40 bits) was also still considered 'munitions' and not allowed to be exported out of the country.

Back then there wasn't a great big free unified toolkit of crypto libraries for anyone to use. The people who were writing crypto software either worked for a university, or a corporation, or a government, and thus all had the tendency to keep their source code to themselves, and code and libraries were licensed instead of given out freely. But a mini revolution had started from the ashes of the homebrew/shareware communities. People started to create free software and give it away at no charge, there were people interested in making web servers and the like, and they could all share one standard library (for things like crypto) so they didn't all have to implement one themselves for every project.

SSLeay was created in order to have a free implementation of SSL and its cryptographic algorithms that wouldn't be subject to US export controls. This was reportedly a "clean room" implementation derived only from documentation, written from scratch by Eric Young in Australia in 1995. It was then used by Tim Hudson in America (amongst others) to add SSL support to basically every free application that could use it, like telnet, ftp, ncsa mosaic & httpd, apache, w3c httpd, lynx, mSQL, etc. Both these men later went on to create commercial versions of this library and leave the work on SSLeay behind, and from that was born the OpenSSL project to continue where they left off.

At the time it was possible to create and run free software that provided SSL access. Actually, companies like Verisign - the only CA allowed by some browsers, for a while - had to make policy changes to start allowing certs to be generated for Apache-enabled SSL sites. But it was not legal to make free SSL-enabled software in the US because the right algorithms had not been licensed by RSA to use in the states. It also wasn't legal to export any strong crypto from the country. You could only use things like Apache with SSL if you purchased a licensed add-on for use in the US, or applied a mod_ssl patch for use outside of the US.

Browsers were in a much worse boat [because they all depended on RSA if they wanted strong encryption] and thus were largely commercial ventures. When Netscape released its source code for the first time under the Mozilla label, all its strong crypto was removed to comply with US laws. A new project (Cryptozilla) had to be created to link SSLeay to the sources. It was then that the rest of the world could finally download a robust, modern, open-source browser with strong crypto.

All of this changed in 2000 when the RSA patents fell into the public domain and the US relaxed its export controls on crypto software. Right after that, Mozilla bundled its own RSA implementation in its NSS crypto library, and the rest (for Mozilla) had nothing to do with SSLeay or OpenSSL. But SSLeay (now OpenSSL) continued to be used by software all over the world.

--

There were a lot of different implementations of SSL/TLS with varying degrees of compatibility bugs. Most of the time a web server was developed in tandem with a web browser (or other similar client/server tools), and a library was created to implement the protocols they needed. This, of course, led to various problems between different implementations, even though if it was web code it was all using the same RSA toolkit for the relevant ciphers.

Even since those days in the mid-1990s, it's been obvious that a single library to handle all the weird implementations is both difficult to implement and very useful to developers. It's never been easy or bug-free. But even at the time, the library was written and re-written to get around things like export controls (the BSAFE SSL-C library was a rewritten version of SSLeay, by its original authors, so their new company could sell it as a non-US-based implementation of a US-patented algorithm).

Personally, I don't think it's offensive to the spirit of the original authors (or current developers) to suggest a fork or a rewrite. The main goals of OpenSSL are to have a free software implementation of SSL and TLS, and secondarily to provide a full-strength general-purpose cryptography library. It's not some abominable, indecipherable, impossible task to re-accomplish this. Hell, it was originally done by one Aussie by himself with just docs for reference. I think maybe the entire open source community can handle organizing a do-over.

--

(Side note: does anyone else realize that the HTML <keygen> tag has been around since Netscape Navigator 3, and still nobody uses it?)


> HTML <keygen> tag has been around since Netscape Navigator 3, and still nobody uses it?

StartSSL uses it for personal (email) certificates. I've tried using it in a toy VPN project but dropped the idea mostly because I was unable to easily explain how to use those certificates afterwards (i.e. export them out of the browser's keystore, import them to smartphone and use with VPN without horrendous tens-of-steps tutorials that'd bore anyone to death).

The issue is, almost nobody uses client X.509 certificates, so <keygen> is obviously unused, too. The core issue is that browser (and other software) vendors never gave a thought to improving the horrendous UIs for certificate use and management. It's still hidden 4-7 clicks away under obscure names, and then it exposes almost-raw X.509 data, which ensures that the casual users who do find their way through the settings are scared away.

I mean, something in this direction is a must: https://www.dropbox.com/s/xxfzf9ed4wfytkj/tls_auth_ui.png


It seems like the browser vendors (Netscape) never intended for there to be a UI for a good reason. It's intended to be used to (basically) transparently install and use a client cert. Each device can get its own key pair/client cert, and no UI is necessary to manage them.


> no UI is necessary to manage them

Not that simple, unfortunately. Even with per-device certificates (which is, I believe, the proper way - have a "master" one signed by remote party then sign your own per-device certs with those), users will still have to:

1. Choose the certificate they want to present to a server. Current UIs are plain terrible and scare every non-techie user (and techies who don't know about X.50x) away. This is absolutely a must, unless Netscape intended for certificate auth to be completely transparent to the end user (which is bad, because many of us have multiple identities).

2. See and be notified in advance of certificate expiration times. I have multiple StartSSL client-auth certificates overlapping by half a year just because I may forget about the expiration (and miss the notification email). This is not absolutely necessary, but quite an important feature if we'd suddenly start to take client certificates seriously.

3. Interoperate with the system-wide keystore (if there is any) for the cases where certificates are going to be used outside of the browser (say, for VPN or email S/MIME). This is not an absolute requirement, but a very useful feature to have. I understand that when Netscape Navigator was young there probably weren't system-wide keystores, though.

4. Securely back up the certificates if they're meant to be used as master ones, not per-device.


Look at it this way. If it was closed source, the fix would take longer, and the realization may have never come.

Honestly, seems like we are forgetting our roots on this. Dump it? Yeah and throw out years of stability and bug fixes. No thanks!


Being widely used is a problem. Monoculture leads to exactly this kind of mass vulnerability.


"But crypto is hard! Don't roll your own", say everyone, ever.


By coincidence, I remembered someone pointing this out to the Go developers not too long ago, after he found out they did in fact roll their own:

https://groups.google.com/forum/#!searchin/golang-nuts/opens...

In light of the current issue it looks like the Go guys did the right thing.


Standards are important. For all practical purposes OpenSSL has become a standard.

TCP can be exploited, it doesn't mean we ditch TCP for another differently exploitable solution.

I see your point but I don't think of OpenSSL as monoculture the same way that Windows or OSX are.


OpenSSL isn't a standard any more than winsock. And fortunately, OpenSSL isn't the only library that implements SSL.


rewriting is almost always a mistake

I'd make an exception for rewriting in a language that prioritizes safety towards the top...

http://www.reddit.com/r/rust/comments/22gppc/when_life_hands...


I thought Rust was old hat? Haven't we all moved on to Node.js? Or was it Grunt or Gulp or something.

I find it hard to keep up.


Isn't Rust a newer language with a ton of recent activity? Isn't it in a different space than Node.js? I'm not following your comment.


Actually, this post reinforced my belief that we should dump SSL and replace it with something better and safer. It's just not worth the trouble at this point. The flexibility will arrive over the coming years, if there's an alternative with a lot more enthusiasm and resources put into it.


I do not want my customers' security to be based on "enthusiasm," and I can't see any way you could get something with more resources put into it than SSL in a near-future time frame.


Someone with more knowledge please correct me if this is inaccurate, but from what I have been able to understand, this basically exposed up to 64 KB of effectively random process memory per heartbeat? So the fear would be that a persistent attacker could continuously run this and scan the output, trying to match things like security certificates?

While I am sure this could be used on high value targets with enough resources, as a small web service outside of the tech industry I feel pretty confident that no one would invest those kind of resources to steal our SSL cert. Is that misguided?


Yes. It doesn't take a lot of resources. It's already been noted on Twitter that someone - possibly quite a lot of people - were using HTTPS Everywhere lists to find interesting targets. Mass hacking and drive-bys are trivial. And remember, it's not just certificates! It's anything that is in your server process's memory (the process that implemented SSL), so it could be passwords or other credentials.
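For intuition, the bug class can be modeled in a few lines (a hypothetical simulation, not the actual OpenSSL code): the heartbeat handler trusts the length field the peer declared instead of the payload the peer actually sent.

```python
# Pretend process memory: the echo payload sits right next to unrelated secrets.
memory = b"PAYLOAD!" + b"|session-cookie=abc123|" + b"-----BEGIN RSA PRIVATE KEY-----"
PAYLOAD_LEN = 8  # the peer actually sent 8 bytes

def heartbeat_buggy(declared_len: int) -> bytes:
    # Trusts the attacker-controlled length: reads past the real payload.
    return memory[:declared_len]

def heartbeat_fixed(declared_len: int) -> bytes:
    # The patch, in spirit: drop records whose declared length exceeds the payload.
    if declared_len > PAYLOAD_LEN:
        raise ValueError("declared length exceeds actual payload; discarding")
    return memory[:declared_len]

leak = heartbeat_buggy(60)  # an "attacker" asks for far more than 8 bytes
```

Everything adjacent to the payload in the process's memory comes back in `leak`, which is why certificates, cookies, and passwords were all in scope.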


Maybe misguided, maybe not, but it's not too hard to get a new cert from your CA, especially if they revoke+reissue for free.


Another 'use namecoin's model' suggestion? Please, please, no. I don't want to place authority for issuing certs solely in the hands of people with mining hardware or botnets, or lots of money to buy from the aforementioned.


> My Heart Bleeds for OpenSSL

Looking at their web site, they have a link right at the top to buy a support contract and another to contact a team member. This level of publicity probably equals big consulting dollars for them.


Just think, Zuck could have paid $18bil for WhatsApp and used the $1bil leftover as OpenSSL or like bounty.

Better use of money and still of use to FB.


I don't think "progenerator" is a word.

Did the author mean "progenitor"?


Your site (blogspot's site?) breaks my back button something fierce.


That's Blogspot / Blogger. Hell, I don't even see any content with JS disabled. Enabling it, as you note, is worse.

From past experience, it's got to do with the dynamic themes Blogger offers. I can't be arsed to even investigate.


In Google (search) you can write `cache:` plus the URL and it shows you a static version (even with JS disabled).


Well, until Google breaks this feature, you can see most Blogger sites without JavaScript by using the RSS feed (in Firefox at least):

http://coderinaworldofcode.blogspot.com/feeds/posts/default?...

In other browsers it might not render the RSS XML feed as a page, Chrome just shows the raw XML. I use this bookmarklet for the process:

    javascript:window.location=window.location.protocol+"//"+window.location.host+"/feeds/posts/default?alt=rss&path="+window.location.pathname;


Thanks, though I avoid Google where possible these days.



