Relying on birth control pills and other primarily woman-oriented techniques leaves 100% of the downsides to the woman (from health risks to emotional challenges and beyond). Comparatively, a condom has barely any downside for anyone involved.
70% is a bit dramatic. The pill does not protect against STDs either. Also, when I was a teen, I did the math: having 70% "worse" sex was still 100% better than having 0% sex.
No, especially in fields that have as many citations and papers on average as ML/DL/RL.
If a paper in any other field were cited 44k times in ~8 years, it would have to have made life-changing discoveries. Maybe some of the early papers on COVID-19 will reach that number.
If it doesn't count for prevailing wage, does it count for the evaluation of the compensation distribution in the first place (genuine question, I'd assume no for consistency of measurement but yes if e.g. W2-reported compensation is used as reference)?
As far as I understand, there is no quick magic algorithm to find them: you train the full architecture the long and hard way as usual, then you identify the right subnetwork, and you can retrain faster from the architecture and initialization of just that subnetwork.
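To make that procedure concrete, here is a toy sketch on a linear model (my own illustration, not the paper's code; the model, the `train` helper, and the top-k pruning are all simplifying assumptions): train dense, prune by weight magnitude, rewind the survivors to their original initialization, and retrain only the sparse subnetwork.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[:5] = 2.0                                  # only 5 features actually matter
y = X @ true_w + 0.01 * rng.normal(size=200)

def train(w, mask, steps=500, lr=0.01):
    # plain gradient descent; pruned weights are frozen at zero by the mask
    for _ in range(steps):
        grad = X.T @ (X @ (w * mask) - y) / len(y)
        w -= lr * grad * mask
    return w

w_init = rng.normal(size=20)                      # remember the initialization
dense = train(w_init.copy(), np.ones(20))         # step 1: train the full model

k = 5
mask = np.zeros(20)
mask[np.argsort(-np.abs(dense))[:k]] = 1.0        # step 2: keep top-k magnitude weights

ticket = train(w_init.copy() * mask, mask)        # step 3: rewind to init, retrain sparse
loss = np.mean((X @ (ticket * mask) - y) ** 2)
print(f"sparse-ticket loss: {loss:.4f}")
```

The key point is the rewind: the subnetwork restarts from its *original* random initialization, not from the trained dense weights.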
I can totally relate :) The hackery you had to do to get the whole stack working was significant and helped me grow my sysadmin skills a lot, and building fresh versions and hacking up new plugins was great motivation to learn to code in a team-oriented setting.
Hasn't Ubuntu switched back to Gnome recently?
Since ~2009 most of the effort was Canonical-sponsored. The C++ rewrite was quite a disruption to the flow of the project (which was originally written in C). The time it took to get the rewrite up and running, plus the language switch, took a heavy toll on the contributors' motivation.
> Most Japanese Tweets are 15 characters while most English Tweets are 34
Doesn't the graph show that most English tweets are 140 characters instead? The first peak on the English curve is indeed at 34 characters, but it's lower than the 140 peak.
You're right about the mode, but the quote is also correct in the sense of the area under the curve: most English tweets are 34±n characters, where n is small enough that the band still covers more than 50% of the area. The curve falls off rapidly from the ~9% peak on the 140-character side.
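A quick synthetic sketch of this mode-vs-mass distinction (the numbers below are made up for illustration, NOT the real tweet data): the tallest single peak can sit at the 140-character cap while most of the probability mass still lies in a band around ~34.

```python
import numpy as np

rng = np.random.default_rng(1)
# bulk of lengths clustered near ~30 chars, plus a spike at the 140 limit
short = rng.gamma(shape=4.0, scale=9.0, size=9_000)
capped = np.full(1_000, 140.0)
lengths = np.clip(np.concatenate([short, capped]), 1, 140).astype(int)

counts = np.bincount(lengths, minlength=141)
mode = counts.argmax()                       # the tallest peak: 140
band = (lengths >= 34 - 25) & (lengths <= 34 + 25)
print(mode, band.mean())                     # the 9..59 band holds most of the mass
```

So both readings can be true at once: the mode is at 140, yet "most tweets" are short.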
Actually it seems like WebGL is doing it even faster. Which makes sense - machine learning involves a lot of matrix math, which GPUs are made for and CPUs aren't.
This is actually what I'd expect, but the website feels quite misleading. Advertising that a GPU-based approach can outperform a CPU for neural nets is not a very strong commercial claim :)
1) As I mentioned elsewhere, the explicit coaching happens when you submit a photo that Keegan thinks is bad (say, score < 5).
2) Mostly true, except that it's surprisingly good at capturing the notion of good timing, which I did not expect from ML ^^
3) Well, as for the Mick Jagger picture, I can see why Keegan dislikes it: it's simply not that good technically. What makes it great is that it's Mick Jagger and that he has a very deep gaze (imho), which the model indeed does not capture. For the other picture, yeah, the score is not that good, but the comment is spot on (in my humble and very biased opinion) :p
Thanks for the feedback, and yeah, it definitely needs more love before it's a perfect predictor! Nicely enough, even heavily supervised classification models still make a lot of errors too ^^ (http://pbs.twimg.com/media/CdOxQRbWAAEUZM6.jpg)