PhD Scientist/Fullstack Developer. 3 years in the industry after 8 years of PhD and PostDoc in Geophysics/Physics. Strong mathematical and analytical background, proficient in ML methods. Self-motivated problem solver and a fast learner. Used to remote work w/ Agile methods.
Developed and maintained (several still ongoing) many projects written mainly in Vue, Angular, and Django, hosted on DO, Netlify, AWS, and GCP. My strongest skill is understanding the fundamentals and applying them as code, like creating a performant magnetic lasso tool based on Dijkstra's algorithm in TypeScript.
I am especially interested in helping environmental/climate causes but anything that requires strong problem solving skills interests me. Feel free to send an email even just to chat.
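A magnetic lasso of that kind can be sketched as a shortest-path search over the pixel grid: treat each pixel as a node, make edges cheap where the image gradient is strong, and run Dijkstra between the anchor points. Here is a minimal sketch; the names and cost model are illustrative assumptions, not the original implementation:

```typescript
// Minimal Dijkstra over a 2D cost grid. The "magnetic" behaviour of a lasso
// comes from assigning low traversal cost to pixels that lie on a strong
// image edge, so the shortest path snaps to object boundaries.
// Illustrative sketch only - not the original tool's implementation.
type Point = { x: number; y: number };

function shortestPath(cost: number[][], start: Point, end: Point): Point[] {
  const h = cost.length, w = cost[0].length;
  const dist = Array.from({ length: h }, () => new Array(w).fill(Infinity));
  const prev = new Map<number, number>(); // flat index -> predecessor flat index
  const idx = (p: Point) => p.y * w + p.x;
  dist[start.y][start.x] = 0;
  // A binary heap would make this O(E log V); a linear scan keeps the sketch short.
  const open: Point[] = [start];
  while (open.length > 0) {
    let best = 0;
    for (let i = 1; i < open.length; i++)
      if (dist[open[i].y][open[i].x] < dist[open[best].y][open[best].x]) best = i;
    const cur = open.splice(best, 1)[0];
    if (cur.x === end.x && cur.y === end.y) break;
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = cur.x + dx, ny = cur.y + dy;
      if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
      const d = dist[cur.y][cur.x] + cost[ny][nx];
      if (d < dist[ny][nx]) {
        dist[ny][nx] = d;
        prev.set(ny * w + nx, idx(cur));
        open.push({ x: nx, y: ny });
      }
    }
  }
  // Walk predecessors back from end to start to recover the path.
  const path: Point[] = [];
  let node = idx(end);
  while (true) {
    path.unshift({ x: node % w, y: Math.floor(node / w) });
    if (node === idx(start)) break;
    const p = prev.get(node);
    if (p === undefined) return []; // end was unreachable
    node = p;
  }
  return path;
}
```

In a real tool the `cost` grid would be derived from something like inverse gradient magnitude, so cheap paths hug object edges and the lasso "sticks" to them.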
There is an assumption that "old" means "experienced" which is not necessarily true. There are people who transition to programming from other fields and are "old". Their value shouldn't depend on whether or not they fit the stereotype of "old timer hacker geeks". They should be judged by their skills, the same way the young ones are being judged.
For the sake of argument though, can we assume all “old” programmers in this case are ones that have been doing this a long time and are actually competent? I’m not disagreeing with you, but I know those are in the minority of us “old” programmers. “Old” vs. “young” can be a measure of experience.
This statement (and the original one) is all well and good, but meaningless (in the same way as "there should be no hunger in the world because we have enough food").
Ok, we should judge people by what they can and do contribute to the team. The billion-dollar question is, HOW? Humans are extremely skilled optimizers - give them an objective metric and they'll game it for maximum benefit; make it subjective and it can be better or worse - but in either case, you'll no longer have any agreement on "what they contribute to the team".
How? Work with them. Pay them enough that they don't have to think about it.
Objective metrics are good if you think you can measure accurately for all individuals at the macro level. You may not need that (I mean, really _need_ it for your business to function, unless you're in the metric-tracking business).
If your teams have distinct and articulable enough outputs, you can reward each team according to its specific outputs. At the macro level, you don't need more detail than that (but you may still ask).
Let the team manage its own distribution of that reward - and so on (so you do need trained managers to do it appropriately at each level).
I'm not saying "people can't be managed" and "one can't evaluate the performance of a programmer" - that'd be obviously false.
What I _am_ saying is that it's an art not a science - very hard to teach, impossible to scale.
Take your example with "Documentation" - if you measure anything objective (words written, number of functions/features that have associated documentation) and tie it with the performance evaluation, those metrics will soon become meaningless.
Poor metrics are meaningless regardless of what you try to measure.
In the example of documentation a better metric might be a composite value of
(i) how many wiki articles a developer has written, weighed 0.33 and
(ii) how useful on a numerical scale his/her peers rate the documentation produced, weighed 0.67.
This is just an example, but it's a composite of the perceptual and the objective, with the perceptual being a lot more important (and there to prevent gaming the system by writing lots of useless articles).
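The composite above is just a weighted sum. A quick sketch of the arithmetic, with the proposed 0.33/0.67 weights; the normalization choices (article count scaled against a team maximum, peer ratings on a 0-10 scale) are my own illustrative assumptions:

```typescript
// Composite documentation score: 1/3 output volume, 2/3 peer-rated usefulness.
// Scaling article count against a team maximum and rating on 0..10 are
// illustrative choices, not prescribed by the comment above.
function docScore(articles: number, maxArticles: number, peerRating: number): number {
  const volume = maxArticles > 0 ? articles / maxArticles : 0; // normalized 0..1
  const usefulness = peerRating / 10;                          // peers rate 0..10
  return 0.33 * volume + 0.67 * usefulness;
}
```

Someone who floods the wiki with junk maxes out only the 0.33 volume term; the dominant 0.67 term stays low unless peers actually find the pages useful.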
The way I've experienced it is that when a measurement becomes a metric, it ceases to be a good measurement. When people's bonus is tied to a measurement, they will find creative ways to influence that measurement which, in many cases, defeats the purpose of what it was trying to measure in the first place. It's not simply a matter of choosing the right measurements. It's also a matter of how you incentivize those measurements.
When I worked at Microsoft one of the teams I was adjacent to had a metric on number of new apps in the Windows Phone app store. So the teams went out and got a bunch of college students to build shitty apps in bootcamp style working groups. Suddenly the number of apps isn't good enough, so they added review metrics. Now those teams add a "let's all rate each other's apps" portion to the bootcamp taking you even further away from the results you're trying to obtain.
It's all in the details. If I get to pick who rates "how useful", this translates into a general "how well liked by my colleagues am I" (a general problem with 360° feedback; I'm not saying it's useless, but it does have its limitations).
If it's true that one can evaluate the performance of a programmer, what does that mean? Can it be reduced to a vector of scalar values?
I hate doing performance reviews, and I especially didn't like when at a former employer I was asked to stack rank programmers. I feel like it's a task that's so difficult to get right, I can't even think of a wrong way to do it that's useful.
> what does that mean? Can it be reduced to a vector of scalar values?
No, it certainly doesn't mean that "your performance was [7.23541 8.1241 34.412515 .52632 995.154]". It means "you did good/ you did very good/ you need improvement", with some details like "<these things> you really handled well and I appreciate it; maybe you can work a bit on <this area> though". I definitely agree one shouldn't stack-rank people - especially in a good team (it's entirely possible, desirable even, that everybody did a great job)
Re initiative. I've had jobs which did not want initiative on a macro level - my manager would tell me what was wanted, I'd go away and make it happen. Those managers loved me. In my current role, bullshitted into a "devops" role, I'm expected to spend my days showing initiative - building things that I think will be useful - and which never get used. I'll take the former jobs. What I have now sounds like freedom; it feels pointless and torturously demotivating. Do my managers know what they're doing? Do they have any roadmap? Can they see shortcomings in our systems? Is it mushroom management or cluelessness? I think these positive-sounding words, e.g. "initiative", are not positive - they are context-dependent. "Potential" - if you need someone writing endless CRUD, and they have potential, are they going to stick around? Do you really want/need Achilles? Are you really leading the Trojan army? Maybe you just need sticky tape, not a welding torch.
You touch on two issues here: one, that there's an entire gamut and not a binary "senior/junior"... and management just LOVE to present requirements like "Can walk on liquid water; God asks him for advice on how to run the world" when you ask "what do I need to do in order to get to next level", especially in big corporations [1].
The second is that different types of people prefer different styles of working and sometimes forget there is another side (multiple "other sides", really). A team needs to be a mix of capabilities and personalities to be successful - a team of 100 identical individuals would likely fail, even if all of them were Peter Norvig. So, you need people that are "senior" in the sense that they "know the business well enough to provide different perspectives" (and for those, "shows initiative" is critical), but you also need specialists for whom "senior" means "knows technology X really really well". Say, a DBA - can keep your database up, can write efficient queries to retrieve the information that you want, but doesn't know sh*t about what information would be interesting to retrieve. Or whether it's a good idea to keep some information X in the database, considering the various business, legal, social, and cost perspectives.
> Do my managers know what they're doing?
Here's the thing: I believe nobody _really_ does. Sure, some know more than others, but in absolute terms, we're all basically guessing. That's why people insist on engineers that "show initiative" - not because they're always right, but in an environment where we don't _really_ know what we're doing, people yelling different perspectives are valuable.
However, as mentioned above - not all senior engineers want or are inclined to show this kind of initiative; and it's unfair to penalize those kinds of engineers, because we _need_ the different types of personalities.
[1] FWIW I believe the reality is, always, that you need to (A) work on a successful project; and (B) be generally liked by your colleagues and maybe managers. For very senior titles, also (C) have a large network of connections within the company - i.e. work on many things, or on one thing that is used by many teams/is really popular.
That is why it is so empowering to have options. I have two employers: one is among the largest and most profitable companies in the US, and the other is the US Army. In one of those, initiative and leadership are rewarded at every level. That makes things interesting because it provides the freedom to rapidly experiment and prototype leadership to empower people, the way I experiment with a side code project. In this environment the people you are leading are more eager to learn new things and be creative than my corporate coworkers, who need a framework to do anything and panic at the slightest hint of originality.
I agree that age doesn't mean experience. I work in a company with a small development team and we're all around the same age group with no young programmers.
In this group we have a programmer in his early 40s, who has been programming for years, that I would still consider a "junior" programmer. He can usually perform a task but needs more guidance and checking than the other members of the team, most of whom have been there less time.
On the other side, we also had a developer a few years back, in his late 50s, who wrote some of the worst-looking code I've ever seen. Fortunately he didn't last long and none of his code is running in production.
The 50-something who had been in the same role for 37 years, who was laid off and, when hired into a new company, just couldn't adapt. The 40-something who's fine doing maintenance and the odd enhancement, but whom you wouldn't trust with a major refactor or a large chunk of green-field work.
Some people just aren't as able, find themselves a comfort zone and stick to it.
So even experience doesn't necessarily mean experience :)
(This isn't to say such people aren't useful, that's a different measure.)
I agree with this. I've known some people who've been coding for 15-20 years who just coast, and end up in senior positions because of how long they've been employed when their skills are really at the level of a 2 year junior.
It's even worse in front-end, because that means their skills are that of a 2 year junior 10 years ago, they don't have any of the skills with new frameworks and techniques. If you want sloppy bootstrap laden html/css and jQuery, they're your person though.
Out of interest, what skills are you talking about? I am an old fart by IT standards and I have come to realise that being able to come up with an algorithm randomly is not such a useful skill considering how infrequently we actually have to do it. Being able to say "no" and getting on with other developers are far more valuable than being able to solve a sudoku puzzle, once you have acquired a certain level of knowledge. After that it should be about managing complexity.
>i have come to realise that being able to come up with an algorithm randomly is not such a useful skill considering how infrequently we actually have to do it.
This isn't what I'm talking about - I'm talking about domain-specific skill. If you're a frontend developer, for instance, you should understand the new language features of JavaScript that came out in 2015-2016 and be able to use them. You should know how to use flexbox for CSS layout instead of floats (IMO).
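For a concrete flavour of the 2015-era language features in question, here is a small sketch; the data and names are my own illustrative examples, not anything from the thread:

```typescript
// A few ES2015 features that replaced older everyday frontend idioms.
const users = [{ name: "Ada", age: 36 }, { name: "Grace", age: 45 }];

// Arrow functions + template literals instead of function(){} + string concat.
const labels = users.map(u => `${u.name} (${u.age})`);

// Destructuring instead of repeated property access (users[0].name, ...).
const [{ name: first }] = users;

// Spread instead of Array.prototype.concat / manual copying.
const moreUsers = [...users, { name: "Margaret", age: 88 }];
```

None of this is exotic; the point in the parent comment is simply that these are now the baseline idioms a working frontend developer is expected to read and write.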
If you're a senior developer still writing code every day, you have to keep up with that stuff. You're ultimately responsible for the codebase in a way that junior/mid devs are not and if you don't understand large aspects of how it works you can't be effective.
It doesn't help that people who stagnate like this usually weren't very good at/engaged with their jobs to begin with. They're usually senior in name only, where their managers know not to actually assign them stuff that matters.
Do new languages, new language features, new frameworks, new methodologies necessarily add business value? Or are they sometimes just being used because they are new and shiny? Requiring people to learn new tools simply because they are new (to keep up, in your words) just puts everyone on a treadmill. One distinguishing attribute I'd expect from a "senior" developer is the ability to quickly evaluate new technologies, choose and focus on what actually provides business value and have the discipline to ignore the rest. It is possible to be a top performing, highly productive developer today by very effectively using 10 year old technologies.
>Do new languages, new language features, new frameworks, new methodologies necessarily add business value?
Sometimes they do, sometimes they don't. But the tech stack will eventually come to include these new things whether you like it or not. And even if somehow you're able to stop that from happening, eventually it will become very difficult to hire junior developers with experience in old frameworks, and eventually those frameworks stop getting support and community contributions as so many other shops drop them.
The two things I listed specifically are actual fundamental changes to CSS and JavaScript, not fly by night frameworks and they both do add a lot of value.
>Requiring people to learn new tools simply because they are new (to keep up, in your words) just puts everyone on a treadmill.
I actually do sympathize with this way of thinking, and I used to agree with it. But I've come to understand how misguided it really is. Every well paying knowledge worker profession requires updating your skills from time to time. Doctors have to learn new guidelines for treatment and different ways to do surgeries/procedures. Lawyers have to brush up on developments in case law etc...
Programming already has an extreme advantage over a lot of these professions. There's no licensing board, you don't even need a degree if you can pass the interview. It's silly to think that we should also be immune to honing our craft and changing with the times.
If you're a software engineer, you should expect that you're going to need to do a little bit of career development every year. I don't mean spending 10+ hours a week on it, but to use the examples above... I realized around 2016 that I didn't know Flexbox, and that was pretty much the new standard for how layout was going to happen in CSS. So I made myself learn it. And not only did that update my skills, Flexbox is actually way better for laying elements out on a page than the old way. It's made building HTML/CSS views much quicker.
But there are other considerations. The constraint solver in flexbox used to be a lot slower to render than other layout methods. And if you have experience in non-web UI frameworks, you can see that flexbox isn't that different from things you can find in Swing and Qt, so you can just wait a while and learn it in a day when you need it.
I've seen nigh unmaintainable web apps that were entirely laid out in really overwrought nested flexboxes, when using utterly basic HTML elements like <p> and <h3> and <dl> with a bit of CSS would have rendered faster, been developed faster, worked in older browsers, and worked better with accessibility tech.
So really, flexbox is just another new old thing, and what matters is the concept of constraint-based layout as one tool out of many. If it's the first new thing you encounter in your career, it's worth learning it because it's there, but remember that it's just another spoke on an ever-turning wheel.
Oftentimes none of that is tested during the interview though, so you don't know whether the person is knowledgeable on any of those or other frontend specifics until you've hired them, because they were able to pass a LeetCode DS&A phone screen and 3-4 LeetCode DS&A onsite rounds.
I've noticed this is changing (spearheaded by FAANG & other top tech companies it seems) - there are frontend specific tracks that test more JavaScript knowledge than leetcode puzzles.
But as usual, if there is a cutting edge, there are many more laggards, and I'd say many companies are still doing leetcode DS&A for frontend too.
It really just depends on where you're interviewing. A lot of places that hire software engineers don't ask algorithmic questions at all.
It's really impossible to generalize about most aspects of this career in my opinion. There are no hard rules about anything, even hiring.
I think about this in relation to the "skill gap" in programming a lot. There are people who work in very senior positions in software development who couldn't do a LeetCode easy or a FizzBuzz. They don't read articles about how to get better at programming or about concepts like DRY etc... but maybe they do the very specific thing their employer wants well enough, that combined with their long tenure and relationships with other people they are pretty much lifers.
That's why I laugh when I see articles on Hacker News where it says if you don't do X, Y, or Z you're not a "real" programmer. Meanwhile a huge swath of people employed as programmers haven't even heard of a lot of this stuff, much less actually used it.
"where" is the key word here I think. But at least in the vicinity of the major US coastal cities (SFBA, Seattle, NYC, probably LA and a few others) where a lot of tech jobs are concentrated, I would say most companies are in fact, leetcoding candidates to some degree.
Half my team is in London (big bank), as is my manager, and I know they leetcode candidates there too - and we're not even a tech company or elite financial firm, just a boring big conservative bank.
If you're looking for a job in this day and age, I think it's safer to assume you're going to get leetcoded and be prepared. Rather than try to find the rarer and rarer company that doesn't leetcode you.
> If you're a senior developer still writing code every day, you have to keep up with that stuff.
Can you give me an example of a task that I couldn't have done with ten year old tech?
Will it automatically be better because I am using this year's chosen framework (or will it be more likely that there are unforeseen problems because I'm not using a mature technology)?
Not the OP but I largely look for experience with CI/CD workflows and release management, unit/integration/systems testing that doesn't cripple R&D, QA, distributed systems beyond a webserver & DB, and just general "philosophy of software engineering" type stuff like an understanding of the factors that led to the evolution from monolith to microservice, agile/scrum/management fad of the week, and how to make tech tradeoffs based on business goals. These are largely soft skills, not algorithmic trivia.
Unfortunately, there's no one size fits all worksheet for those soft skills that HR can hand out to interviewers, so everyone gets lazy and falls back to the whiteboard BS.
Frontend dev is a really good example of a field that is all short-term churn, missing the longer trends. It doesn't matter what framework you use, or no framework at all, if the site can be maintained, the site is fast, and it looks good in all the browsers you are targeting. Bootstrap, jQuery, React, Vue, etc. can all be sloppy, or they can be used incisively.
Yes it's not about valuing age, but it is about valuing experience. Sometimes raw talent and problem solving ability is the most important quality to look for, but sometimes experience in the field is incredibly relevant and valuable, and can't be replaced by raw talent.
If you're young, you fit in a narrow spectrum of experience with a relatively low upper bound.
If you're old, then the spectrum of possible experience is much broader. You could range from being totally inexperienced to being extremely experienced.
Also, as I get older, I realize that talent plays a significant part (which is independent of age). But there is a huge problem that almost all companies don't know how to identify technical talent; they focus on the wrong attributes like ability to perform under pressure and ability to recall details. Companies should be focusing on a candidate's ability to synthesize information, to rank problems based on their importance and to communicate simply and clearly; that is the real valuable talent.
I agree with you as far as these general skills, but I think that the value of applicable experience should not be discounted.
For example, I work with a firmware developer in his 50’s, and he is so efficient it is scary. He’s basically seen it all by now, and he has deeply ingrained work habits which let him solve problems extremely quickly.
That’s not to say that every developer with X years of experience will automatically be great in that way, but if you can find someone who has years of experience producing exactly the type of work you need, this should be viewed as a huge advantage.
It’s like if you are looking to hire a carpenter to build a piece of custom furniture: you can and should consider general qualities like strength, manual dexterity, and attention to detail. But if you can find someone with years of muscle memory building exactly that type of furniture, they are almost guaranteed to get the best result.
I think that on average, an older experienced developer will be more skilled than a younger one, even with less natural talent. Experience is really important.
>Companies should be focusing on a candidate's ability to synthesize information, to rank problems based on their importance and to communicate simply and clearly
So, from my non-tech perspective, that's just every single job at every single employer. Any position I've had that has been in charge of hiring people - I am acutely aware that I can teach the position. I can teach the technical aspects and detail knowledge of how to do a job. What I can't teach is the ability to prioritize, critically think, and communicate appropriately. Those are real talents.
I guess what I'm saying is - tech companies are garbage at evaluating for those things, because (I would argue) all/nearly all companies are garbage at evaluating for those things. It doesn't matter if it's a tech company, garbage company, or insurance company - they're all (again, my experience) equally garbage at finding those skills in people through their standard interview processes.
If you figure out how to implement a standard interview process to evaluate for the three things you list, you will be the world's first trillionaire.
If the current trend in the software engineering market continues, by 2040 we'll most likely have many more profiles with 10x2 years or 20x1 year than 1x20 years. I have the impression that very few companies value the latter nowadays.
Ten years of experience in one tech stack will give you mastery. Changing every year to a new language/framework/toolset will mean you are forever a novice.
If you know 10 different tech stacks you probably have immense breadth and can learn/do new things easily. That’s is incredibly valuable. There is a massive difference when you are learning a new stack for 11th time vs 2nd or 3rd time. Sometimes you need people with deep mastery of some specific, sometimes you need people who have seen everything, can learn anything, and synthesize a broad view.
That is not entirely true. After learning the n-th stack, knowing that the knowledge will likely sputter out within a couple of years, one learns just enough to get done what needs to be done. As someone points out, few out there are true experts if they're constantly learning a new stack. And people who want to learn all the time do get bored of learning the same kind of thing; a lot of them branch out in directions that are a lot less shapeshifting.
Ideally I'd prefer learning a technology for a few years, then stepping back, reaping the benefits, and doing stuff with it. But there is a fear that not keeping up leads to obsolescence on the job market, so we're in this constant cycle of learning new things with diminishing returns.
First, I accept that there are cases where extreme specialization is important. I just think these cases are few and far between.
The main skill for senior/staff swe is not in knowing or not knowing some technology, but rather
1. Picking things up quickly
2. Understanding/recognizing generalizable patterns and best practices in any tech.
Almost no job requires some extreme expert level knowledge of language/framework minutiae (obviously you do need some experience in particular tech). Almost every non-greenfield job has a ton of custom tech that has to be learned. Almost every green-field job needs someone with immense breadth.
Having been exposed to very many things people get to naturally see recurring patterns/best practices. When they see a new tech they are able to understand why choices were made. When new tech needs to be made they can draw from broad experience in what others did best. I really think this is most of the eng value of staff swe.
> There is an assumption that "old" means "experienced" which is not necessarily true.
Agreed
> They should be judged by their skills, the same way the young ones are being judged.
Disagree, but this is my subjective opinion. Old people in general are not able to keep up with young people for many reasons. My personal stance is we shouldn't compare them as equals, but rather try to get the most out of everyone. The baseline for judging someone (to be hired or to stay in a job) should depend on the effort they put in. Actual performance should be taken into account when deciding on promotions and bonuses. This would create an environment safe for old people to keep their jobs until they retire, but also fair to people of any age to get higher salaries and roles based on their skills.
It makes me happy to see folks not dogpiling on you. I'll try to be similarly gentle.
It seems to me maybe you've got the hero mentality. That sacrifice == hours spent grinding == commitment to the job. "Effort" to you is a measurement of love.
That totally described me in my 20's and 30's. One day, after acquiring the necessary skills, I woke up and realized that it didn't make sense to grind like that anymore. Also, long hours are bad for your mental health.
I switched to "work smarter, not harder" mode and learned about TDD and Agile. I got better at my craft and earned respect from my peers for encouraging them to get better too.
Finally, I had a really eye-opening experience with an older developer a few years ago. He was a contractor and owned a gym. He always took off at 3pm to go train his folks and work his other business without asking/telling anyone. He was gruff and a little bit intimidating physically. He even had terrible typing skills: he typed with two fingers like a kid! Not sure if that was a dexterity handicap or if he just never learned to type, probably the former, but it still pissed me off because he was SOOO SLOW!
It surprised me a bit when he rolled off that he had lasted his whole contract and didn't wash out sooner. It surprised me at the time that he had such deep networks in the company: VPs knew him and had worked with him years earlier, and they had lively random conversations. It surprises me now that his contributions to the codebase have endured - he just got a lot done in fewer lines of code. I think fondly now of the conversations we had, not just on coding, but on parenting, politics, physical health and strength training, military service, and just ... diverse, weird thoughts from my parents' generation.
Think of yourself and your beloved company as The Borg. Your job is to incorporate the technical distinctiveness of aliens into your collective. If people you run across are turds, and you have a culture of hard-charging success, they'll wash out pretty quickly. Just let go of effort and efficiency as metrics - beancounters can concern themselves with that. Focus on winning over the long term.
Hey mate. Sorry for the misunderstanding. When I said effort I meant just ensuring we're not dealing with a lazy person who slacks off in the safety of a job. By no means do I want people competing by putting in more hours.
I have learned that putting in more hours does not result in the benefits one would hope for very early on.
Thanks for your comment though, it made me understand why everyone is super freaking pissed about my comment. Had I understood that earlier, I would have edited it to avoid all this mess.
Firstly, I am all in for discrimination that results in positive effects for people in need (as long as it is reasonable and not blind benefits for the sake of supposedly helping minorities).
However, my solution does not judge people differently based on age. It judges everyone in a way that allows old people to fairly compete with young people on the basis of effort put in rather than pure skills/performance output.
> However, my solution does not judge people differently based on age. It judges everyone in a way that allows old people to fairly compete with young people on the basis of effort put in rather than pure skills/performance output.
Fairly compete in...making an effort? Would you like to get operated on by a surgeon that lacks the skill necessary to perform his job but tries damn hard? Would you want to go to a concert where the string section has been working real hard on learning the material but can't read sheet music and are tone deaf? Would you want to ride the bus where the driver has been practicing all his life but can't quite drive well enough to meet a reasonable standard based on skill? Do you want to follow a basketball tournament where the winner is determined by their effort rather than their score? Why should anyone respect a workplace that is effectively a daycare for try-hards? How can such a workplace result in a good product or service?
My most favorable take on your idea is that you simply can't have thought it through.
Frankly, the whole premise that old people need to be "allowed to compete fairly" by arbitrarily judging workers on some other merit than the quality of their work strikes me as incredibly patronizing. It's this attitude that leads to age discrimination in the first place, not that old people somehow can't produce quality work.
Instead of fucking up the workplace by introducing perverse incentives, consider socialized efforts that can maintain a sense of security for people that for one reason or another can't perform the work available to them to a reasonable standard. Pensions, unemployment insurance, disability insurance...that kind of thing.
> Would you like to get operated on by a surgeon that lacks the skill necessary to perform his job but tries damn hard? [...] Would you want to ride the bus where the driver has been practicing all his life but can't quite drive well enough to meet a reasonable standard based on skill?
Breaking news: this is already happening. In fact, pure performance-based judgement drives incapable people to hide their inability to perform the task at hand, leading to bad results (instead of facing reality in a safe environment that would help them get better at what they do).
> Why should anyone respect a workplace that is effectively a daycare for try-hards? How can such a workplace result in a good product or service?
This sentence assumes the majority of workers are incapable try-hards which is plain false. A few less-skilled try-hards aren't going to ruin any service. Again, this is already the case in the world we live in.
> Frankly, the whole premise that old people need to be "allowed to compete fairly" by arbitrarily judging workers on some other merit than the quality of their work strikes me as incredibly patronizing.
I clearly stated two layers of judgement: a baseline for effort, and a second layer for career progression based on performance, both age-agnostic. I never proposed a system discriminating against age. Get over it please.
> consider socialized efforts that can maintain a sense of security for people that for one reason or another can't perform the work available to them to a reasonable standard. Pensions, unemployment insurance, disability insurance...that kind of thing.
Ok so I proposed a system that would ensure someone has a job as long as they are not a lazy-a$$ (but probably wouldn't be able to climb up the career ladder if they don't have the skills), always age-agnostic, but that is somehow not a socialized effort to provide security? Ok.
We are talking about technical abilities...if you discriminate outside of that, including age, I would whole heartedly call that the bad kind of discrimination.
I am really not sure why my point isn't clear. I am not judging people differently based on age. I observe that skills/performance is not a fair metric for old people to compete against young ones on. Thus, in order to create a level playing field, I argue that for the baseline decisions (getting and keeping a job) people of all ages should be judged on the effort they put in, a fair metric for all ages. Then comes the second layer of judgement, based on skills, which decides who gets promoted or gets performance bonuses. Old people would have less of a chance to win in this second layer, but at least they can keep their jobs safe as long as they put in effort.
There's a common trope about an established worker who works really, really hard at tasks - evenings, weekends, the whole deal.
Then they retire. Someone else takes over, and they do exactly the same work in $small_percentage of the time.
So no - effort is not a good metric. Why would you be paying someone who can't do the job - or at least can't do the job well, with enough spare capacity to deal with more complex work requirements?
Of course this is a parable, but I suspect a lot of people have seen something similar happen at work at least once.
By doing that you are hurting your business by not putting the best person in that position. Also, that better person can help bring up your younger less experienced staff.
Blindly saying 'ok, you are technically better, but you're also old and I want young people' is what you are arguing.
Haven't actually read the study, but I'm not sure I understand your comment. Even if every village had only one birth, the probability of having no girls at random across all 700 villages would be 1/2^700, which is practically impossible. Perhaps there is something else that made you respond critically to their methods?
Over 700 villages, all with a birth rate of about two births per 3 months, there might very well be 132 villages with no girls born, 140 villages with no boys born, and 428 villages with one boy and one girl born. It wouldn't be alarming at all.
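To make the point concrete, here is a quick binomial check of those numbers. The figures (700 villages, 2 births each, equal odds per birth) are the comment's illustrative assumptions, not real data from the article:

```python
from math import comb

# Toy numbers from the comment (assumptions, not real data):
# 700 villages, 2 births each, P(girl) = 0.5 per birth.
n_villages, births, p_girl = 700, 2, 0.5

# A village has "no girls" only if every one of its births is a boy.
p_all_boys = (1 - p_girl) ** births            # 0.25

# Expected number of all-boy villages under pure chance.
expected = n_villages * p_all_boys             # 175.0

# Binomial pmf over villages: P(exactly k all-boy villages).
def pmf(k):
    return comb(n_villages, k) * p_all_boys**k * (1 - p_all_boys)**(n_villages - k)

# Seeing at least 132 all-boy villages by chance is near-certain,
# since the expectation is 175.
p_at_least_132 = 1 - sum(pmf(k) for k in range(132))

print(expected)        # 175.0
print(p_at_least_132)  # ~0.9999
```

Under these assumptions, 132 all-boy villages is actually *below* what pure chance would produce on average, so that count alone wouldn't be alarming.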
But if these villages represent a contiguous area, then it is alarming. The article doesn't really give enough information to say anything about this really.
Data is from a single district, and then blocks. So blocks are contiguous. Analogous to say New York metro area, and then Brooklyn, Manhattan, Queens etc.
He was saying that 132 villages out of 700 villages had no female babies.
> The 132 villages where no girls were born over three months have all been marked as part of a “red zone”, which means local data will be scrutinised more closely and health workers have been asked to be vigilant.
The "red zone" is not a single area, it's a black list.
There's just not enough information in the article for any of us to know. If every village had one birth, you'd expect half of them to have had 'only girls'. It's hard to believe it's that simple a case though. Some of the villages may have had a dozen or more births, we just don't know.
Seismologist here. So far, no reliable predictor of main shocks (the largest earthquake in a sequence) has been found. People have been looking for one ever since modern seismometer networks appeared in the 1960s.
Most EM-based earthquake prediction rests on really loose reasoning. In summary, it goes like this: "earthquakes can generate EM fields through the piezoelectric effect, therefore small movements before big earthquakes should generate small EM fields we can measure". But there is almost never any such thing as "small movements before big earthquakes", which is why reliably predicting them has been impossible so far.
Most likely, this will turn out to be an example of confirmation bias. In the unlikely event that it is not, people will be all over this.
To be fair, there are some well documented EM precursors. The most classic is from the 1989 Loma Prieta earthquake: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/GL01... Recently, some fairly convincing ionospheric-related (i.e. GPS delay) precursors have also been observed before Tohoku and (less convincingly) several other major earthquakes. (e.g. Heki, 2011, Iwata & Umeno 2016, and several other references I forget. Mostly from the same couple research groups.)
We don't have a full mechanism to explain them (or, rather, there are a lot of competing mechanisms that don't fully explain things). More importantly, precursors don't seem to occur consistently as you noted.
However, they're also not worth automatically dismissing. The idea that they're all a simple case of confirmation bias has been extensively discussed, and while it's not currently possible to refute, it's starting to seem less likely. There's certainly been a lot more attention given to possible precursors and mechanisms in the last 5 years than there was before. It's a serious avenue of research right now. Keep an eye out the next time you're at AGU. I guarantee you'll see at least a few posters on possible precursors and/or precursor mechanisms.
Forecasting is definitely pseudoscience, but EM-related precursors are a fairly hot (and controversial) research topic at the moment.
> Forecasting is definitely pseudoscience, but EM-related precursors are a fairly hot (and controversial) research topic at the moment.
It's not pseudoscience because it can be disproven--which is generally what happens.
The issue that everybody forgets is that predictions have FOUR outcomes, not two.
You have the one everybody remembers: "I predicted X and X happened".
You have the one some people remember: "I didn't predict X and X didn't happen."
You have the one that people rarely remember: "I predicted X and X didn't happen."
You have the one nobody remembers: "I didn't predict X, but X happened."
The problem is that for rare events, the "predict X and not X" and "didn't predict X but X" have to be REALLY low probability for a measure to be useful.