Yeah, I think this is one of those domains where frictions (or lack thereof) matter. It is much faster, cheaper, and easier to send out a significant volume of material on the Internet than through the mail. In theory, somebody could send a bunch of obscene trolling letters, but having to lick all those envelopes seems to deter most of the people who'd be otherwise tempted.
No, but most people think being informed is. It's about power dynamics; understanding how that works doesn't mean you have to act like an asshole and manipulate your friends.
And it's something I wish more people would understand: all the legal and ethical rules in the world don't mean shit so long as God either doesn't exist, or does exist but doesn't care.
> In 4 years of working cases I would estimate 1 in 6 jurors are above the 85-115 IQ range of average intelligence, and maybe half are at or below the 100 line.
Maybe I'm missing the joke, but isn't IQ meant to follow a normal distribution with a mean/median of 100 and a standard deviation of 15? In which case you'd expect half of jurors to be below 100 and ~15% to be above 115, which is pretty close to what you've seen.
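For what it's worth, a quick sanity check of those fractions (a sketch, assuming IQ ~ N(100, 15) and using scipy):

```python
# Sanity check, assuming IQ ~ Normal(mean=100, sd=15).
from scipy.stats import norm

iq = norm(loc=100, scale=15)

print(f"P(IQ < 100)      = {iq.cdf(100):.3f}")               # 0.500 -- half below the median
print(f"P(IQ > 115)      = {iq.sf(115):.3f}")                # ~0.159 -- about 1 in 6
print(f"P(85 < IQ < 115) = {iq.cdf(115) - iq.cdf(85):.3f}")  # ~0.683 -- the "average" band
```

So "1 in 6 above the 85-115 range" and "half at or below 100" are almost exactly what the textbook distribution predicts.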
What I mean to say is that if the average-intelligence person, especially over the age of 50-55, is very susceptible to believing deepfake video, then any video you can successfully show a jury of twelve over-50s will likely fool half of them by default, ten of twelve more likely than not, and all twelve at least half the time.
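To put rough numbers on that, here's a sketch that treats each juror as independently fooled with probability p; that independence assumption is mine, purely for illustration:

```python
# Toy model only: 12 jurors, each independently fooled with probability p.
from scipy.stats import binom

for p in (0.5, 0.8, 0.95):
    jury = binom(n=12, p=p)
    print(f"p={p:.2f}: P(>=6 fooled)={jury.sf(5):.2f}, "
          f"P(>=10 fooled)={jury.sf(9):.3f}, "
          f"P(all 12 fooled)={jury.pmf(12):.3f}")
```

Under that toy model, "fooling all twelve at least half the time" only happens once p is around 0.95, i.e. when the video fools nearly every individual who sees it.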
If you're in a case where deepfakes might be used against you, you had better hope that either your jury is mostly in the 25-45 range and of above-average intelligence, or that your lawyer knows how to deal with those videos, since they'll get to review them before they are shown.
I think a modern US 1-year-old has about a 99.97% chance of making it to adulthood. That means that if a modern US adult loses a young child, there's a decent chance they don't know anybody else who has had that experience.
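A rough illustration of why (the survival rate is from above; the social-circle sizes are made-up assumptions):

```python
# Illustrative only: how many children you'd have to "know" before
# you'd expect to know a family that lost one.
survive = 0.9997  # chance a 1-year-old reaches adulthood (figure above)

for n_children in (100, 500, 2000):  # children across one's acquaintances
    p_no_loss = survive ** n_children
    print(f"{n_children} children known: P(nobody lost a child) = {p_no_loss:.2f}")
# 100 -> 0.97, 500 -> 0.86, 2000 -> 0.55
```

Even someone whose acquaintances have 2000 children between them has roughly even odds of never encountering such a loss.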
The ancient (and even, as you point out, very slightly pre-modern) world had a lot of "infrastructure" in place to deal with this: there were rituals and ceremonies and familiar people who knew what you were going through, and most of that is gone now.
It's not gone. It's just less common and, at least in my experience, hidden inside churches where people are open about this sort of thing, and where, in many of them, miscarriages are treated much the same way: as a loss to be grieved. Sometimes that grieving is private, but it's better when it's shared, because others have gone through the same thing, suffering silently.
But you're right, it's far harder to go through an experience alone, and loss of a child has certainly become far, far less common than it used to be. At least, if you limit it to the born.
In the US, in 2023, 1 in 3 never made it to birth.
I was going to ask about this number, because it seems high enough to be statistically improbable, but back-of-the-envelope arithmetic says otherwise: there are about 10 cases of pancreatic cancer per 100,000 people per year [1], so let's say each person has a 1 in 10,000 chance of a diagnosis each year. If you know somebody for 50 years, there's a 1 in 200 chance they receive a diagnosis in that time, so you'd expect to need to know 2000 people to eventually know 10 diagnosed people. 2000 is a lot, but "knowing" a person is a pretty loose term, and pancreatic cancer has a miserably high death rate within 5 years, so it's unfortunately plausible.
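Spelling that arithmetic out, using only the figures already given above:

```python
# Back-of-the-envelope from the figures in the comment.
incidence = 10 / 100_000   # diagnoses per person per year [1], ~1 in 10,000
years = 50                 # how long you "know" someone

p_diagnosed = incidence * years      # chance of a diagnosis over 50 years
people_needed = 10 / p_diagnosed     # circle size for ~10 expected diagnoses

print(f"P(diagnosis over {years} years) = {p_diagnosed:.3f}")    # 0.005, i.e. 1 in 200
print(f"People needed to expect 10 cases: {people_needed:.0f}")  # 2000
```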
This is people I know or my family knows. My mother knows 4-5 people. I've had 2 coworkers die of it. I've had 1 in-law die of it. It's crazy how fast the numbers add up.
Keep in mind that there could be clusters of cases related to environmental contamination, so it's very possible that some people know more people who get a particular form of cancer.
IMO the confident Tweets about how to become a good researcher and hire good researchers look pretty weird next to the lack of any apparent research papers (or even visible research products) five years later.
>I moved from Sweden to Ghana, West Africa. I started working as a teacher in the countryside, but after invoking the spirit of their dead chief, they later anointed me the king of their village.
I knew a guy like this. He was fairly transparent about the fact that he talked a lot of bullshit about himself, in the sense that he didn't keep it realistic and didn't respond to reasonable skepticism the way a normal person would, but a lot of people responded positively to it. The people it worked on recognized that he wasn't speaking literal truth and mentally discounted what he said by some large percentage, thinking that they were critically adjusting for his boastfulness. But he painted such a grandiose picture of himself that anyone who subconsciously assumed even 10% of it was true would still be impressed.
He was a weird dude. I think he grew up in a household where outrageous boasting was common and had been doing it so long that it was his authentic self. He desperately wanted people to be impressed with him, and he had mastered a strategy that had a consistent yield, but the people it didn't work on looked down on him pretty severely, and he didn't have any way of earning their respect. He had to trust in the fact that over time he would tend to find himself surrounded by people he could manipulate.
Indeed. I realized it's hard to compete with PhD students for grants, subsidising my work with content marketing doesn't fit my style, and I prefer owning my work and choosing my own research direction. I also want people to use my work, and to create solutions that are cost-effective.
So the most logical way was to bootstrap an AI start-up in the area I'm interested in, so that's what I'm doing. Unfortunately, it's hard to publish or contribute to open source, since it becomes too easy to copy, which cuts my margins and my ability to fund my research and compute.
Now I spend most of my days doing AI research and outsource most of the other parts. I'm really enjoying it :)
I think you'd attract a more sympathetic audience if you reframed Emil's Story as "How to get people to leave you alone and let you think." E.g. half of your AI autodidact degree boils down to 'make money with what you've learned', and it's always interesting to see people unpack their approaches to this idea. Your thinking around target numbers would be valuable.
I agree, it was written some time ago and the language is hyperbolic, but many of the key points still hold true. Building a skillset to reach $100/hour consulting gigs was key to working 2-3 days per week, having time for research, and saving money for an ML rig, while living in Paris. Around $5K MRR is sufficient to live off, $10K MRR is more comfortable and allows renting a few extra A6000 Adas and starting to outsource, and $20K MRR affords full outsourcing to focus on research and renting 8xH100s. That's a good goal for balancing research freedom with other responsibilities. After that, it's optional to trade research freedom for higher MRR and more responsibilities.
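For concreteness, the consulting arithmetic behind those tiers (a sketch: the rate and days/week are from above; the 8-hour day and 4.33 weeks/month are my assumptions, and consulting income is treated as comparable to the MRR tiers):

```python
# Sketch of the consulting math; hours/day and weeks/month are assumptions.
rate = 100            # $/hour, from above
hours_per_day = 8
weeks_per_month = 4.33

for days_per_week in (2, 3):
    monthly = rate * hours_per_day * days_per_week * weeks_per_month
    print(f"{days_per_week} days/week -> ${monthly:,.0f}/month")
# 2 days/week -> $6,928; 3 days/week -> $10,392 -- roughly spanning
# the $5K "sufficient" and $10K "comfortable" tiers mentioned above.
```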
Sure, so it seems fair for him to offer advice on how to apply AI or start a company based on it, but doing AI research means generating new knowledge about AI itself, and I don't see any evidence of that.
> doing AI research means generating new knowledge about AI itself, and I don't see any evidence of that.
Wouldn't colorization count as research? In the vision domain there are a lot of papers like this: just arranging and rearranging known blocks and getting a SOTA result on some datasets. ;)
How do you arrive at this conclusion? Individuals don't have to tell you where their money comes from. They might even be easier to influence/buy than the people inside the big news institutions.
Mostly by observing the sacrifices they make for their coverage, especially over several years. There's always an element of trust; that's life. I'd direct anyone to Seymour Hersh, Glenn Greenwald, and Matt Taibbi, for example.