Getting agreement on a better scoring system for CVEs will be hard enough, assuming it's possible at all given the competing interests.
It makes a top-down imposed set of technical fixes for a lot of what's broken in our industry look, at best, like an impossible dream. Anyone claiming they have an oracle that tells you how much effort should be put into QA for any given piece of software is a bullshitter. If you let the bullshitters loose, they will create a quagmire of rules leading to a huge amount of busy work that mainly benefits them.
A huge amount of experimentation is required to figure out what approaches work. Granted, that experimentation isn't happening now. That's why the EU's approach looks like the right one to me: prevent vendors from shrugging off all liability for defects in their products in their licences, which gives bugs (of all sorts) the potential for a serious financial bite. The severity of the bite is determined largely by the customer - did it hurt so badly that pursuing the vendor in the courts (perhaps via a class action) is worth it? That, IMO, is where severity should be determined. Vendors and bug hunters have their own agendas, and numerous examples have shown those agendas seriously compromise their ability to grade bugs. Finally, it leaves software developers free to experiment and invent their own responses. That's far better than handing that responsibility to bureaucrats. There are far more computer engineers out there than regulators, and their solutions will do more to make their products reliable than forcing them to follow some universal set of rules, no matter how well intentioned those rules may be.
Paternalistic interventionism wrapped up in the usual engineering propensity to overestimate our ability to understand and solve political and human problems well outside our immediate expertise.
Currently being certified has no value, not to Open Source, not to Closed Source, since companies are clearly doing fine without it. So it's hard to see them paying extra for it.
We've had professional certifications for years. Novell, Citrix, Microsoft, SAP, Oracle, they all make money selling certification to naive users. Anyone who's bothered knows they don't really mean much.
But hey, if you think there's demand, set up a body and give it a go. Personally I think it's a waste of time, but if you can get enough companies to care, and enough developers to pay, you'll have a nice business.
So, historically, the creation of bureaucracy in the US government has included industry professionals to guide the requirements, plus a public comment period before finalization. This is done because most people in government recognize they are not up to date on the latest industry knowledge.
Destroying everything and creating a new bureaucracy is in absolutely no way better than fixing the original one with updated information.
It seems you may have fallen victim to the very well thought out "government bad!" argument.
> We know this doesn't work, and author admits as much.
Where do I admit this? About fines? Yes, fines don't work.
The difference with my proposal is that companies wouldn't lose a few days' worth of revenue to a fine, they would lose 100% of revenue. That goes from being a "cost of doing business" to an existential threat.
> Not to be rude to the author, but it sort of seems like they forgot that not all software is developed in the US.
I didn't forget. In fact, it's precisely because software is worldwide that I keep pushing this here in the US. The EU already passed the Cyber Resilience Act [1].
Sure, we may not get things to apply globally, but we don't need global agreement on the punishments. We just need global agreement on the certification.
We have done global agreements before: ICANN, the International Telecommunication Union, etc. ICANN is interesting because it started as US-only and expanded.
>Where do I admit this? About fines? Yes, fines don't work.
Yes, about fines. From your post: "Ah, yes, fines for companies are not enough. I agree."
>they would lose 100% of revenue.
We can't get the government to enforce this when tens of millions of records are leaked publicly; it absolutely will not happen for failure to report a vulnerability. If you have any idea of how to make it happen, please, let's immediately apply it to breaches and then figure out how to apply it to failure to report vulnerabilities.
>We just need agreement on the certification globally.
As far as I am aware, there is no certification (one which is legally required to obtain a job) on the planet that is globally recognized. But I would be happy to be proven wrong here.
>but we don't need agreement on the punishments globally.
Which will end up with some countries unwilling to charge a 100% loss of revenue, causing a mass exodus of companies from any country that does charge 100%, thus making the solution untenable.
ICANN is an interesting example, but it's not a certification. The scale (and thus administration, compliance, etc.) is very different.
Copyright reserves most rights to the author by default. And copyright law anticipated future changes.
Copyright law (in the US) added fair use, which is weighed on four factors: the purpose and character of the use, the nature of the original work, the amount used, and the effect on the market for the original. Not all of the factors need to fail for fair use to disappear. Usually two are enough.
The factor courts weigh most heavily is whether the copy is used to create something commercial that competes with the original work.
From near the top of the article:
> I agree that the dynamic of corporations making for-profit tools using previously published material to directly compete with the original authors, especially when that work was published freely, is “bad.”
So essentially, the author admits that AI fails this test.
Thus, if authors can show the AI fails another test (and AI usually fails the substantive difference test), AI is copyright infringement. Period.
The fact that the article gives up that point so early makes me feel I would be wasting time reading more, but I will still do it.
Edit: still reading, but the author talks about enumerated rights. Most lawsuits target the distribution of model outputs because that is reproduction, an enumerated right.
Edit 2: the author talks about substantive differences, admits they happen about 2% of the time, but then seems to argue that means the models are not infringing at all. No, they are infringing in those instances.
Edit 3: the author claims that model users are the infringing ones, but at least one AI company (Microsoft?) has agreed to indemnify users, so plaintiffs have every right to go after the company instead.
> If I’m hiring “senior” developers, being comfortable communicating technical topics and answering questions is a requirement.
While I agree completely, I also know plenty of people who fit this description but who probably aren't the folks you'd ask to give a PowerPoint presentation on a technical topic.
TBH I've done my time in management and done my fair share of presentations, but I would HATE this to the point that I might well opt out.
There's a reason I'm not in management anymore, and a presentation like described is a far cry from working with stakeholders and engineers to define and document technical requirements. Or even presenting those to a group with shared context.
I might well take the fact that you've made it part of the interview process as an indication that this is a regular job requirement, as opposed to something I'd have to do here and there.
>The only problem is taking into account those that are not good at public speaking.
A very common concern, but overblown in my experience. If you notice, I never actually said "judge the candidate's presentation skills" (or anything of the sort) when explaining why I think this process is great. The presentation is really just level-setting; the candidate gets to set the topic and give sufficient context for a conversation to occur. The presentation is at most the first 15 minutes out of a ~3 hour in-person interview process. That's how little it matters.
It's the Q&A and subsequent discussions that matter.
The problem is that interviewers have a strong tendency to judge candidates based on whether they come across as self-confident, even when instructed not to. It's possible to get people to not do this, but it requires fairly rigorous training. tptacek wrote about this a decade ago: https://sockpuppet.org/blog/2015/03/06/the-hiring-post/
I'd argue the "presentation and Q&A" format addresses that directly. The candidate gets to pick exactly what the interview is going to be about, at least at the beginning, so they have full control over first impressions. No gotchas at all. Who wouldn't pick something they're confident about?
If someone thinks CMake is super cool and knows all sorts of great use cases for it, then they should present that. They should also be prepared to answer open-ended follow-up questions like "broadly speaking, how could a project transition from something like Automake to CMake?" or "what are some footguns in CMake and how can we avoid them?"
The idea is to pick a topic you're so jazzed about that your enthusiasm overrides The Fear.
One of the things I like to do on the hiring side is hold interviews in the smallest room people won't complain about. The way we think about public speaking has a lot to do with how close we are to each other.
> The question is: should compiler authors be able to do whatever they want? I argue that they should not.
My question is: I see so many C programmers bemoaning the fact that modern compilers exploit undefined behavior to the fullest extent. I almost never see those programmers actually writing a "reasonable"/"friendly"/"boring" C compiler. Why is no one willing to put their ~money~ time where their mouth is?
> I almost never see those programmers actually writing a "reasonable"/"friendly"/"boring" C compiler. Why is no one willing to put their ~money~ time where their mouth is?
Because it is not much harder to simply write a new language, where you can discard all the baggage. Lots of verbiage gets spilled about undefined behavior, but things like the preprocessor and the lack of "slices" are far bigger faults of C.
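To make the "slices" point concrete, here's a minimal sketch (my example, nothing from the parent): the idiom every C codebase ends up reinventing, because the language itself won't tie a pointer to its length:

```c
#include <stddef.h>
#include <stdio.h>

/* A hand-rolled "slice": C makes you pass pointer and length
 * separately, and nothing stops the two from drifting out of sync. */
struct int_span {
    const int *ptr;
    size_t len;
};

static int sum(struct int_span s) {
    int total = 0;
    for (size_t i = 0; i < s.len; i++)
        total += s.ptr[i];
    return total;
}

int main(void) {
    int data[] = {1, 2, 3, 4};
    struct int_span s = { data, sizeof data / sizeof data[0] };
    printf("%d\n", sum(s)); /* prints 10 */
    return 0;
}
```

Languages like Zig, Rust, and Go bake this pairing into the type system; in C it's a per-project convention at best.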
Proebsting's Law posits that compiler optimizations double performance every 18 years. That means you can implement the smallest handful of compiler optimizations in your new language and still be within a factor of 2 of the best compilers. And people are doing precisely that (see: Zig, Jai, Odin, etc.).
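Taking the law at face value, the arithmetic behind "within a factor of 2" is just (my sketch, not the parent's):

```latex
\[
  \text{speedup}(t) = 2^{t/18}
  \quad\Longrightarrow\quad
  \frac{\text{speedup}(t)}{\text{speedup}(t - 18)} = 2^{18/18} = 2
\]
% i.e. a compiler that stops at the optimizations known 18 years ago
% stays within a factor of two of the state of the art.
```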
I'm willing to write a C compiler that detects all undefined behavior but instead of doing something sane like reporting it or disallowing it just adds the code to open a telnet shell with root privileges.
Can't wait to see the benchmarks.
I was thinking more along the lines of detectable instances with the compiler introducing "optimizations", but as a C "programmer" I do not mind bounds checks or any other runtime improvements that stay true to the language.
If it's implementation-defined that you can turn them off when you're building for the PDP-11, I'm sold.
Compilers already warn when they detect _unconditional_ undefined behavior. They just don't warn on _conditional_ undefined behavior because doing so would introduce far too many warnings.
Exploiting undefined behavior for optimization requires only local analysis; detecting whether that undefined behavior actually arises (either unconditionally or at all) requires global analysis. To put it differently: the compiler often simply doesn't know whether the undefined behavior arises, it only knows that the optimization it introduces is valid anyway.
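A minimal sketch of the distinction, using a hypothetical function (mine, not the parent's): the dereference below is UB only when the caller passes NULL, so warning about it would flag nearly every pointer dereference ever written. Exploiting it, though, is purely local: having seen `*p`, the compiler may assume `p != NULL` and delete the check.

```c
#include <stdio.h>

int length_plus_one(int *p) {
    int n = *p;        /* UB only if p == NULL: conditional UB,       */
                       /* so the compiler emits no warning            */
    if (p == NULL)     /* after *p the compiler may assume p != NULL  */
        return 0;      /* ...and delete this branch entirely          */
    return n + 1;
}

int main(void) {
    int x = 41;
    printf("%d\n", length_plus_one(&x)); /* prints 42 */
    return 0;
}
```

GCC and Clang really do perform this transformation (it can be disabled with -fno-delete-null-pointer-checks); whether any caller actually passes NULL is the global question the compiler never answers.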
I was willing to, but the consensus was that people wouldn't use it.
C and C++ programmers complain about UB, but they don't really care.
TCC is probably the closest thing we have to that, and personally, I made all of my stuff build with it. I even did the extra work to add a C99 (instead of C11) build mode to my projects so that TCC would work.
Shouldn't your blog post then condemn the C community at large for failing to use a more "reasonable" C compiler instead of complaining about compiler authors that "despite holding the minority world view, [...] have managed to force it on us by fiat because we have to use their compilers"?
You don't have to use their compilers. Most people do, because they either share this "minority" world view or don't care.
My blog post doesn't condemn the C community at large because it was the HN comments on that post where I found out that the C community is a bunch of hypocrites.
Designing a new VCS, operating system, and GUI framework because those are the three things I am most worried will be taken over by corporate interests.
I don't know which one I will actually implement, though.
My problem with Kagi is the AI junk. I get that people want it, but I do not, and I don't want to pay for it. In other words, if there is no "zero AI" plan that is cheaper than the equivalent one with AI, then count me out.
And it doesn't even have to be that much cheaper. Maybe 15% or so.
That said, a lot of people want it, so I hold no grudge that they offer it. It's certainly a more reasonable use of AI than most.
> it doesn't even have to be that much cheaper. Maybe 15% or so
Isn’t this the $10 versus $25 plan? Are you saying you’d feel better if they charged you $9 and turned off quick answers? (Which are totally optional to use.)
The $10/mo. plan is technically not "without AI". You still have FastGPT (quick answers), meaning any query ending with "?" aggregates an answer from the top search results, rather quickly. And you have Kagi Translate and the Universal Summarizer.
So there isn't a "no AI Kagi subscription".
But the search itself is worth $10/mo. if you ask me.
I subscribed a year before I bought into AI, and I was very happy with it.
I believe you're funding a free tier more than you're funding AI product research.
The AI stuff is mostly easy to ignore: just don't end your queries with a question mark. Image search is more annoying, though, because you have to click a button every time to remove AI results. You can't set a persistent preference.
I should be able to go into my settings and just say “don’t ever show me AI search results unless I ask, which doesn’t mean using a normal, commonplace punctuation mark”.
It’s very frustrating that “turn it off” is a fucking afterthought to “turn it on” for a service I pay for.
> “turn it off” is a fucking afterthought to “turn it on”
Kagi has been an AI-oriented company from the start, not an "… and AI!" gimmick. The bet is that AI-curated search results will eventually replace normal search results.
If I just want a factoid on a website, I might as well have a program crawl and scan for it.
If I want the summary and I don’t bother reading the long version, I might as well have a program crawl and summarise.
If I want specific details in a big document, and I am not familiar with the wording to look for, I might as well prompt.
It’s like complaining about shellfish at a seafood restaurant. :-)
Kagi was not, to my memory, positioned this way early on when I started paying for it. It was "search but not dogshit because we're not being sponsored by the search results".
If I want a factoid on a website, I want to see it with my own eyes, not filtered through some non-deterministic shit machine that might tell me the right answer. To each their own, but my getting what I want doesn't get in your way at all, so we should both be able to use Kagi the way we want to.