The name is a play on the author's last name (Josh Wardle): https://www.nytimes.com/2022/01/03/technology/wordle-word-ga...


If you read nothing else, the commit message adding JIT support is worth your time: https://github.com/python/cpython/pull/113465


To the guy who said the pull request was horrible:

IMO, the most important thing about a pull request is to... actually be productive. I've worked with people in the past who would nitpick my commit messages, wanting me to waste hours fiddling with arcane Git commands, any of which might (and probably would) clobber my work.

If you're doing code reviews, a good use of resources is to look for security problems, performance issues, and general engineering problems with someone's code.

A BAD use of code reviews is to spend the time making stylistic comments ("I would have written it like this, it looks much better"), being overly pedantic about code formatting, or forcing your OCD on someone's Git history. The reason people hate code reviews is that somehow they're always done by this latter group of people.

If you force people to waste time for silly reasons, you can count on them eventually leaving your silly company. I know I have. I was once about to join a company but saw that they literally used a committee to review every commit message, forcing every engineer to rewrite code for reasons that made astrology look like a hard science by comparison. Consider not doing that.


A good reason for enforcing some rule on Git commit messages, though, is to have messages that help when bisecting to find where an issue was introduced. If a commit message is too meaningless, or strays too far from the convention, you spend more time tracking down and solving issues.
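
For example, a minimal bisect session looks like this (the tag name is made up); the payoff at the end depends entirely on how good the culprit's commit message is:

    git bisect start
    git bisect bad HEAD        # the current commit has the bug
    git bisect good v1.4.0     # last version known to be good
    # Git now checks out midpoints; mark each one until it
    # prints the first bad commit - and its message
    git bisect good            # or: git bisect bad
    git bisect reset           # return to where you started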


If you have a test that checks for the regression, you can use it directly to find where the regression was introduced, without relying on the message, no?
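
Something like this, say, assuming a hypothetical ./test_regression.sh that exits non-zero on failure:

    git bisect start HEAD v1.4.0          # bad revision, then known-good revision
    git bisect run ./test_regression.sh
    # bisect run marks each commit from the exit code (0 = good,
    # 125 = skip, anything else up to 127 = bad) and names the culprit
    git bisect reset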


No, the message should tell you the why of the change. If you don't know that, you risk a "fix" that simply removes the change, putting you back at the issue that motivated the change in the first place. This is even harder if the person bundled the commit with others that were merged separately or, worse, worked directly on the main/master branch. I've been there a lot. It's not hard to write a good commit message, and if everyone on a project does their part except one programmer, you notice it every time a bug turns out to have been introduced by that one person.


The reason one should write commit messages in the common form is to shave a few seconds off the time every reviewer needs to understand what is going on.

It's not unreasonable (within the limits of sanity, of course), it's just a matter of respecting other people's time.


If someone who made the changes can't explain them clearly, I'd even wager they didn't properly understand what they were changing. A good commit message is needed because the author may be gone while the software is maintained for decades. Being productive is easy; writing a good message isn't. It seems to be hard for many, and it's telling of their understanding of the systems they change.


Many teams will just use squash merges to avoid wasting time on commit messages and end up with just one commit to the main branch and cleaner history.
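
If you're doing this locally rather than via the hosting platform's squash-merge button, the equivalent is roughly this (branch name made up):

    git checkout main
    git merge --squash feature/login-form   # stages the combined diff, no commit yet
    git commit                              # one commit, one well-written message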


When I'm having to wade back through previous commits trying to debug a problem, there's nothing I hate more than finding someone's large squashed commit. Instead of using automated bisect tooling, I have to try and understand what they did and untangle it. This might be an area of the codebase I'm unfamiliar with and it'll take even longer. Compared to this, writing a few extra commit messages costs nothing. I really don't get it.


That's like returning to publishing tarballs instead of having single, revertable commits.


`git rebase -i develop` is arcane? An editor opens where you put an 'r' in front of every commit that needs rewording. Save and close, and Git will successively open an editor for each commit message. Force-push and you're done.

If you're not willing to bother with the command line: the Git client in JetBrains IDEs also lets you edit commit messages in a very straightforward way.
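
Concretely, the todo list Git opens looks something like this (hashes and messages made up):

    $ git rebase -i develop
    # editor opens; 'r' (reword) keeps the change but lets you edit the message:
    r a1b2c3d Fix typo in login handler
    r e4f5a6b wip
    pick 9c8d7e6 Add login test
    # save and close; Git prompts for each reworded message, then:
    $ git push --force-with-lease   # safer than a bare --force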


Force push can be scary to some. Some nitpickers may also ask you to squash/reorder/split commits to satisfy their OCD.


I am one of those who would insist on correctly splitting commits. Commits are a communication tool, and a well-crafted series of commits makes reviewing pull requests, which is a chore for most people, a lot easier. Months or years later, it still matters that changes are easy to review.

In a Gitflow repository I don't particularly like squashing feature branches since a sequence of commits allows the author to better document what they were thinking about. Squashing is fine in Trunk-based development since feature branches are usually less extensive.

I hate intermediary merges on feature branches because they tend to make up a large portion of the branch history and are harder to review than normal commits.
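
For anyone who hasn't done it, splitting a commit is a short, mechanical exercise - a sketch, assuming the commit to split sits on a feature branch off main (commit messages made up):

    git rebase -i main      # mark the target commit with 'e' / 'edit'
    git reset HEAD^         # un-commit it, keeping the changes in the working tree
    git add -p              # interactively stage the first logical chunk
    git commit -m "Refactor session storage"
    git add -p              # stage the next chunk
    git commit -m "Fix expiry check off-by-one"
    git rebase --continue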


Those nitpickers are probably people who have tried to track a regression in a big codebase before. It's not OCD, it's just useful version control.


Wow, I underestimated how good it would be.


[flagged]


I'm certain it was.


I guess you didn't realize it's a riff on an existing poem.

https://www.poetryfoundation.org/poems/43171/a-visit-from-st...


Yeah, I'm getting old. I would prefer a TL;DR formal version and then the fun Xmas version for people who feel festive. But hey, I'm not paying for this, so who am I to complain :)


[flagged]


Yeah, ChatGPT can't reliably generate poetry that scans. (It can't count syllables in words, so this would be a bit much to expect of it.) I haven't read the entire poem, but the bits I have seen at least have the right syllable counts (although with some rather odd stress patterns).


Why do you suspect that? I really doubt it, because it's actually good. Do you think nobody is able to write good poetry without help from a machine? Like... I've written stuff like that in my life. It's not even particularly hard, if you believe you can do it and workshop it for a while.


No way an LLM came up with "Now cache away, cache away, cache away all!"

https://chat.openai.com/share/9f701679-43b1-45b7-9f70-9b1c26...


I'm certain the poetry wasn't GPT-generated.


400+ commits with a ton of them being "catch up to main"? Wow, I guess even good programmers don't know how to rebase/use git properly.
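
For reference, the merge-free way to catch up is a couple of commands (remote and branch names assumed):

    git fetch origin
    git rebase origin/main        # replay your commits on top of main; no "catch up" merge
    git push --force-with-lease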


Do you want ferret Doctor Manhattan? Because that’s how you get ferret Doctor Manhattan.


If by any chance you’re the graphic designer who picked South Williamsburg hotspots for the iOS preview screenshots… well, hi neighbor! Good choices.


“Position in band” is often a factor in performance reviews. Consider a common process - say you’re “senior level” and in your review you’ll be rated on six categories, then given a raise based on where you are overall relative to “meets expectations.”

If you’re senior but at the bottom of the senior band, and you’re mostly “at expectations” for your level but maybe “slightly below” in 1-2 categories, you’ll probably still net out at “meets” with a normal raise. If you’re senior but the highest paid senior - that’s probably going to net out at “below” overall, or zero-to-small raise.


Such systems do exist in smaller companies where leadership knows everyone and their contributions, but I’ve never seen it work this way in big companies.


Counterpoint from Josh Sokol, former OWASP board member: https://www.linkedin.com/feed/update/urn:li:activity:7031305...

The OWASP nonprofit isn’t like the well-funded Linux Foundation; it runs on a shoestring budget made worse by the loss of conference revenue during the pandemic. OWASP charters events, local meetups, training content and OSS projects - the authors of this memo focus only on the OSS project needs. The OWASP board sees itself as community first and foremost; projects should seek their own sponsorships.


If OWASP wants to focus on chapters and events, why do they have projects under their umbrella at all? We had a similar problem in the .NET ecosystem with the .NET Foundation. It turned out they don't really do that much for the projects they oversee after all, so what's the point? Why be part of an organization that isn't providing the support you need?

Perhaps, indeed, they should not be. Given this response, it sounds to me like the projects should leave. What they need is simply different than what OWASP wants or is financially able to provide. The projects have outgrown the organization, and the organization doesn't see itself as being primarily about the projects. Sounds, to me, like it's time to make a clean break that unburdens OWASP and frees the projects.


The projects should leave. I don't think they are a critical component of OWASP compared to the educational material provided through their documentation and conferences.


Two of the major projects in the list of cosigners on this are the OWASP Top 10 project and ASVS, which are the two big educational projects at OWASP.

I don't especially love either of those projects, but they're arguably the two most important things OWASP works on outside of the conferences. The Top 10 project can't really leave OWASP (ASVS could).

ZAP is the only other project there that I think is all that important to the identity of OWASP itself, but it should just go find its own sponsorship anyways. People like ZAP, but the industry standard is Burp Suite; Burp is Microsoft Office to ZAP's... LibreOffice? Like all the software freedom stuff aside, if you're a professional, you use Word.


Even the OWASP Top 10 often seems most interesting in the vein of "That thing that was a problem 10 years ago? Yep, still a problem." That's a bit unfair - stuff does move around a bit over time, and some new categories come in - but it often mostly seems to document how relatively little things change.


I don't think the OWASP Top 10 is especially good, and in general think it mostly serves as a tool to raise the salience of application security, rather than as a guide to implementing it. It almost doesn't matter what the Top 10 is.


Back when I was attending DevOps Days fairly regularly, that's pretty consistent with how I saw the OWASP Top 10 being used - to highlight security in general as opposed to any specific categories.


Well, there are a lot of legacy applications out there.


Josh Sokol would appear to agree. A response on his LinkedIn post:

> Honestly, if they can get $5-10M from "somewhere else", I say go for it. Then maybe the Foundation resources can be hyper focused on catering to Chapters and Events.


Lots of stuff in the comments already about dual-track and external career options.

I'd add: can you hire a deputy who enjoys doing this?

* Ask for the next project management hire to be a true TPM, and try to pull someone who's working on something boring but is really interested in getting into your area of technology. Have them run all the Jira, summarization, execution of planning processes, comms & coordination. You'll still need to own hiring, still need to do a lot of people development work and still have to get involved in marketing & product strategy - but might get some leverage in internal pieces you don't like.

* Promote or hire another person who shows interest and aptitude in management, with the explicit plan for them to be your deputy and to take on some of the "tasks you don't really enjoy," especially if scoped to a specific area. Can you structure your work so someone else is responsible for 50% (or even more) of it? Can they do the RFP first-pass reviews, and even be responsible for pushing back on things in their area of responsibility? Can they manage some of your directs, and co-lead some of the above processes within their scope? There might be someone on the team who feels underutilized, wants more visibility, has strong relationships and is willing to learn. And if there's not - that's signal for you and your hiring process that maybe you've only hired folks like you; it may be time to have a broader set of folks on the team.


Several of the concerns revolve around the complexity of managing multiple IDs (product SKUs, prices, etc) for a single subscription - and the author reaches the conclusion that this is because it's optimized for "e-commerce and not B2B SaaS".

While from the buyer's perspective "single negotiated cost with overages" may appear simple - it's a single bill, after all - on the accounting side of the company selling the product I'd expect it's much more complicated, with potentially different tax rates for different products and complexity around producing an auditor-defensible determination of "cost of goods sold" versus "marketing expense."

So for at least some of these requests, I see Stripe's posture here as helpful - it's not "requesting a dollar figure", it's "creating a detailed enough accounting trail behind that bill to operate your business." Looking across the breadth of Stripe's products, I'd give them the benefit of the doubt here.
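
To make the ID sprawl concrete: even a "single" negotiated bill decomposes into per-product prices and subscription items on Stripe's side, which is exactly what makes per-product accounting possible. A sketch against the API (all IDs and amounts made up):

    # one Price per product, so tax and revenue attribute per line item
    curl https://api.stripe.com/v1/prices \
      -u "$STRIPE_SECRET_KEY:" \
      -d product=prod_platform_seats \
      -d currency=usd \
      -d unit_amount=500000 \
      -d "recurring[interval]=month"

    # the "single subscription" is really a list of items
    curl https://api.stripe.com/v1/subscriptions \
      -u "$STRIPE_SECRET_KEY:" \
      -d customer=cus_negotiated_deal \
      -d "items[0][price]=price_platform_seats" \
      -d "items[1][price]=price_premium_support"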


Her speech early in the Covid-19 era was one for the ages[1]: short, personal, reflective of history, yet with a clear call to action for her country. I'm not British and also found it exceptional.

[1] https://www.reuters.com/article/health-coronavirus-britain-q...


"It reminds me of the very first broadcast I made, in 1940, helped by my sister. We, as children, spoke from here at Windsor to children who had been evacuated from their homes and sent away for their own safety."

This is the broadcast she was referring to: https://youtu.be/VJI9LPFQth4


I'm long past my academia phase, but recently led the PC for an industry conference (acceptance rate: ~15%).

1. Curation is important because of physical limits (venues only fit a certain number of people), attention limits (attendees will usually retain only a handful of "nuggets" no matter how packed the agenda is), and interaction limits (you can't meet everyone at a large conference).

2. If the goal of a conference is not just to "stamp" research as somehow "approved", but to encourage discovery and knowledge exchange that deepens a specific area, it's important to apply that curation filter with an eye toward best advancing the goals of the conference. That means not just going for things that are okay, but those that best resonate with other presentations / attendees / research topics.

3. While the size of any one conference has to be fixed, tech has made it infinitely easier to create new conferences and journals with other focus areas. They may not start with the prestige of a larger journal, but if the papers published start to have an impact, it can catalyze an entire subfield of work.

Some conferences are tied exclusively to "novelty" (ACM academic conferences), others to "incremental advancements" (the bigger industry conferences in security, like USENIX Security), and some to "best explaining ideas" (like Enigma).

There are new ways to find an audience for your work and create impact - that's part of the job now.


I'm well-known in a research community. I'm positioned such that I don't need more academic points. I've mostly stopped publishing in branded prestige academic venues, in part due to rejection rates.

My goal in doing work and writing papers is to see them disseminated. The acceptance/rejection process is asinine -- studies show it's basically random. I've had one paper in my whole career where the reviewers did a proper review (e.g. worked through the math). The rest were quick skims. Comments often show the reviewers never read the paper. The stuff that makes it through this process is often nonsense, while very high-quality work is often cut.

The very best paper I wrote in my career has never seen the light of day. It was shortened to a 4-page work-in-progress because a reviewer didn't read it (the feedback was literally nonsense: that the sample size was small enough to be anecdotal; I had the largest sample size in the history of the research field).

The only impact of this egotistical search for prestige-via-low-acceptance-rates is that people who have better things to do with their time leave, and that research dissemination is slowed.

Those excuses make little sense in the real world:

1) If your conference has a 10% acceptance rate, it's easy enough to book a bigger venue next year. I've been to conferences with dozens of people, and ones with tens of thousands. It all works well. Bigger ones work better, if anything.

2) PCs aren't thoughtful enough to do that well, and even so, the goal of a conference shouldn't be to select things which resonate with the entrenched PC. That's why many ideas need to wait for a generation of old, conservative professors to die to make it out there.

3) The whole obsession with prestige is stupid and misguided.

Journals and conferences ought to have quality bars. Are there typos and grammar errors? Were there clear IRB ethics violations? Did you use error bars on your plots? Was data fabricated? Is the research methodologically sound? Is it coherent and readable? And so on. If it passes those bars, it should be published. If no one reads it / attends the talk, that's okay too - importance can and should be determined after the fact.


I may have been radicalized during my short time in the academic world, but IMO, conferences are a really bad setting for disseminating new ideas. They just don't favor it. In practice, you have people preaching their ideas, a lot of people not listening, and a few misunderstanding. Nobody else.

Spreading ideas is better done on paper, with guided discussion, and without time limits. Or, in other words, on something like per-paper hierarchical internet forums.

Conferences can be useful for discussing and working over known ideas. For that, they should always feature papers that are already published and have had some community attention. Debuting new ideas in front of an unprepared audience is antagonistic to that goal.


> Comments often show the reviewers never read the paper.

This. I was not in computer science, but in a different technical field, and this is sadly common. We would often have to appeal to the editor with "The topic the reviewer said we didn't address? It's in Section X. Get us another reviewer."


The biggest mystery in the whole thing is why someone who is volunteering to review papers anonymously would bother to do it badly when they could simply not do it at all.


No mystery. Behavior converges to incentives:

* You do get academic points for chairing a conference, and as a chair, you do need to find reviewers.

* A colleague is running a conference and asks you for a favor. You want to help your colleague. Reviewing papers wins you points with them, and declining to review burns bridges. When you're running a conference, you'd like them to reciprocate. Plus, they might be on a grant, hiring, or other committee later on. Burning bridges in academia is very bad.

On the other hand, there is no incentive to invest more than 30-600 seconds per review. Neither you nor your friend really has any reason to care about the quality of the conference.

As this process repeats, people put in less and less time each time around, since it doesn't matter. The process converges to random noise.


> Reviewing papers wins you points with them, and declining to review burns bridges. When you're running a conference, you'd like them to reciprocate.

Surely they'd get upset if you rejected all of the good papers, thereby ensuring that they would have a bad conference.


Why would they care? They get their name as the chair of the conference on their CV. No one remembers who chaired a specific year or how a given year went. If they did, there's enough noise in the process it wouldn't be attributed to the review process in particular. Some years, weaker papers come in, and others, stronger ones do.


Just accept people who have given a lot of conference talks before and it will be fine. That is the fastest and easiest way to review, so unless there is pressure to do things differently, that is how most will do it.

If there is still space left at the end, you can look at the others and take the first papers that look fine until there are no spots left.


There is (for good reason) more focus these days on diversity - broadly defined, e.g. new speakers - at non-academic conferences. However, there were quite a few conferences in the tech sector historically that tended to have a core of "the usual suspects," with others grabbing a smaller number of leftover slots. TBH, I probably benefited from this over the years. (Conferences run by companies follow somewhat different rules but still usually have a stable of Top Rated Speakers who tend to get slots.)


Because they want to appear as if they are an active participant in the community.


I think this happens in all fields. It’s probably a professor on a PC dumping the review on an unsuspecting and overworked PhD student or MS student who really doesn’t care and just wants to get some sleep.

And yes - say what one may - PhD students are overworked and underpaid, at least in most of the US.


I've learned to address those diplomatically with "the topic you mention is now included in Section X". Technically true, and lower friction.


Haha yes. Everyday example of this frustration (really happened):

"So when is their wedding?"

'Next week on Saturday.'

"Whoa whoa whoa, do you mean this coming Saturday, or the Saturday that happens next week?"

'Next week on Saturday.'

"Okay, gotcha, thanks, it was kinda unclear before."


> studies show it's basically random

The "basically" is important though, because there are some nuances to it.

However, the point I've actually come here to make is that since publications are a strong factor in your career progress in academia, a corollary of the above is that making it in academia is basically random, too. Which is also true for other reasons: for every open professor position in a certain field, there are usually a number of candidates who are all equally highly qualified. But only one of them can get the gig. If the selection is not random, then it's typically based on other factors, such as how well you are connected, your gender, or whether some other professor at the faculty fears competition from you - which may not be random, but is equally out of your control in all but a few cases.


My experience is that at elite schools - Stanford and MIT - the remaining factor is how much one is willing to cheat. There is a random component and a merit-based component, but most (and I have a large n here) successful faculty candidates at those schools succeeded by cheating in some way.

That can be data baking, credit theft, or a whole slew of other techniques, but in my department at least, most new faculty at these two schools are in some way crooked.

Also, for nuance on random:

http://blog.mrtz.org/2014/12/15/the-nips-experiment.html

From the article:

"99.99% of us are honest but the dishonest 0.01% can cause serious, repeated damage."

My experience is that this is much more like a 50/50 split at elite schools, at least when you look at people who succeed at making it to faculty positions. The BMJ estimates that 20% of publications are based on fabricated data:

https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-hea...

That sounds about right for what I've seen at MIT. Note that 20% of publications being based on fabricated data is in line with my 50/50 split figure: researchers who cheat only do it part of the time, and often in ways that don't involve direct data fabrication. Critically, the numbers go up significantly for high-impact publications - the types that make the news and make scientific careers. By the time MIT's PR machine picks up a publication, and the press picks it up from there, the odds of it being fraudulent are much higher than 50/50.


> Comments often show the reviewers never read the paper.

And when they do, it's not certain they understood it, or even put in the slightest effort toward understanding. I have a rejected paper where one of the comments was that the header of a table featuring 4 columns named N, V, ADJ, ADV was "hard to understand". The table sat between two paragraphs, each mentioning nouns, verbs, adjectives, and adverbs, in a paper mostly about dictionaries...


> My goal in doing work and writing papers is to see them disseminated. The acceptance/rejection process is asinine -- studies show it's basically random. I've had one paper in my whole career where the reviewers did a proper review (e.g. worked through the math). The rest were quick skims. Comments often show the reviewers never read the paper. The stuff that makes it through this process is often nonsense, while very high-quality work is often cut.

How come this is not fixed?


Because the leaders are the people who made their careers in the current system, and they wouldn't benefit from making things more meritocratic. These are the people who argue endlessly that meritocracy is bad for reason X or reason Y; they just want to keep their current privileges.


I'm still a bit surprised at how basic human defects can reach even the "smartest" spheres of society.


> 3. While the size of any one conference has to be fixed, tech has made it infinitely easier to create new conferences and journals with other focus areas. They may not start with the prestige of a larger journal, but if the papers published start to have an impact, it can catalyze an entire subfield of work.

Does it though? The largest conferences I go to as a CS academic have hundreds of people. There are academic areas where 10x as many people participate. The size limitation is a self-imposed excuse to keep acceptance rates low. I have been PC chair of two conferences, and my attempts to expand the numbers were shot down by the steering committee precisely for this reason, not because we couldn't find a larger room.


Try "publishing" in https://researchers.one

