"The Royal Family" gets similar treatment. I suspect that it's a language thing---there are only two Anglosphere countries with presidents, and the other one (Ireland) has ~1/50th the population of the US and lots of Irish speakers, so if you see an English language post about "the President", then there's a good chance that it's referring to the US one. I'd like to see speakers of other languages (or Anglophones from outside the Anglosphere) confirm or refute this hunch.
In Germany, elections don't rely that heavily on the candidates, so it is hard to say. But generally speaking there is no clear identification either (except for the fact that the address ends in ".de").
Or perhaps it was an attempt to destroy the idea that individual freedom and democratic accountability present an attractive alternative to an Islamic theocracy? Certainly, you can see how the very idea of personal freedom is an affront to an ideology based on submission to God and to religious authority.
What better way to discredit freedom than to turn it into an illusion?
A trip to Alpha Centauri would be decidedly intra-galactic; the distance from the Sun to the edge of our galaxy is on the order of 4300 times the distance from the Sun to Alpha Centauri.
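For what it's worth, that ratio roughly checks out with back-of-the-envelope figures (my numbers, not the parent's: Alpha Centauri at ~4.4 light years, the Sun ~26,000 light years from the galactic centre, and a stellar disc very roughly 90,000 light years across, which puts the nearest edge at ~19,000 light years):

    19,000 ly / 4.4 ly ≈ 4,300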
Yes, calling it "cross" galactic would certainly be stretching the definition of "cross". Though I suppose a "cross country" race is across the "countryside" rather than across the country. ;)
If you still like the idea of buttons, but don't want to give up using the vendor-supplied buttons themselves, you can use the "two clicks for more privacy" jQuery plugin[0], which only loads the actual button when the user enables it by clicking a greyed-out placeholder.
I'm also partial to Socialite JS [0]. It provides a similar mechanism; the option I use is hover, which negates the need for clicking (something I think would confuse some users).
1) Served over raw HTTP so that Uncle Sam can add your request to his database.
2) Twitter and Facebook buttons up in the top right-hand corner in case you feel like associating your real-world identity with trollthensa.com.
3) The Big G's analytics running in the background just in case Twitter and Facebook didn't collect enough data.
4) All JavaScript and typefaces served from third-party CDNs.
5) Email hosted on... GMail. Because Google haven't been accused of granting the US Government carte-blanche access to their users' emails. Ever.
But don't worry, because they got their domain through DomainsByProxy, notable champions of Internet freedom!
This is why the PRISM leak won't change anything. It's all well and good to say "I'm OUTRAGED over what's going on", but it doesn't absolve you of the responsibility to start taking privacy seriously and reducing (or eliminating) your dependence on services which could trivially compromise your users' data.
Here[0] is where Facebook defines what they consider to be "hate speech":
> Content that attacks people based on their actual or perceived race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease is not allowed. We do, however, allow clear attempts at humor or satire that might otherwise be considered a possible threat or attack. This includes content that many people may find to be in bad taste (ex: jokes, stand-up comedy, popular song lyrics, etc.).
Granted, it's a horribly distasteful image macro, but Facebook acted according to their written community standards, so I'm not sure why their response is "shocking". It's also important to bear in mind that the page is titled "Offensive Humor at its Best", so I'm not sure why it comes as a surprise that the images are, well, offensive.
> ANd thanks for the cliche that poor = robber/criminal
Parent was pointing out that a reasonable government welfare program does a lot less to foster resentment than a free market "fend for yourself" approach.
> I am sure many honest poor people appreciate your point of view.
Honest people still have to eat. Poor people don't (usually) choose to be poor, and if you force them into a corner with no other alternative, they will either
a) Die
or
b) Resort to crime or some other undesirable activity to prevent themselves from dying.
https://www.google.com/trends/explore?date=2014-01-01%202017...
More seriously though: others have pointed out that finetuning is pretty popular in some subfields, but it's just one hammer in a whole toolbox of techniques which are necessary to make neural nets train (even when you have a tonne of data). Standardisation, choice of initialisation, and choice of learning rate schedule all come to mind as other factors which seem simple, but which can have a huge impact in practice.
Of course, each tool has its limitations. The most obvious limitation of finetuning is that you need a network that's already been trained on vaguely similar data. Pretraining on ImageNet is probably not going to help you solve problems where the absolute size of objects matters, for example, because good ImageNet performance tends to benefit from scale invariance.
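To make that concrete, here's roughly what finetuning looks like in PyTorch (my framework choice; the backbone, the 10-class head and the learning rate are placeholders, not anything specific to nanonets.ai): reuse an ImageNet-pretrained network, swap the classification head, and initially train only the new layer.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from an ImageNet-pretrained backbone.
    model = models.resnet18(pretrained=True)

    # Freeze the pretrained weights so early training only fits the new head.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer with one sized for your own task
    # (10 classes is just a placeholder).
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the new head's parameters go to the optimiser.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

Once the new head converges, you'd typically unfreeze some of the later layers and keep training with a smaller learning rate.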
I wish you luck with nanonets.ai, but I think it's irresponsible to market this as the "1 weird trick" to bring data efficiency to neural nets.