I don't understand how you can improve performance continuously while adding features, unless you optimize parts of the program unrelated to the new features...
Common excuses people give when they regress performance are, "But the new way is cleaner!" or "The new way is more correct." We don't care.
It's worth pointing out that increasing CPU speed will always make your software faster, but it will never make it more correct. Considering all of the WebKit browsers I've used seem to crash every half-hour, I think the WebKit team may want to rethink their software engineering strategy. Getting the wrong answer really fast is pretty much useless.
>Considering all of the WebKit browsers I've used seem to crash every half-hour
Funny. I've been using Chrome regularly for about 2 weeks now, and the only crashes I've had are when it tries to start emusic's download app. I have one window open with a few tabs, and it's been open for about 2 days now.
Either something is wrong with your setup, or something is unusually great with mine.
I think you're misunderstanding what they're trying to say. I think they're trying to escape the perfectionism that can stall a project and halt progress.
They have a very extensive testing framework that tests for correctness. Even if they didn't, if a browser isn't doing something correctly, people would notice right away. Their "we don't care" assumes the people working on the project are competent and will produce code that is good enough™.
Modern WebKit browsers tend to be extremely stable, enjoy a clean code base, and run very fast, so they're doing something right.
That's a cop-out. That's why, for the most part, my computer never seems to get faster: the apps keep getting slower. I doubt they're getting 'more correct'. I wish more projects thought like WebKit and treated performance as one of their #1 priorities. In fact, the only things worth sacrificing performance for are (1) security and (2) stability. I haven't had any problems with WebKit browsers on Windows (not that I used Chrome for long), and the Linux stuff isn't a straight port, I think.
Declare that the previous code obviously wasn't functional and therefore the performance is invalid?
Following this literally is a recipe for getting stuck in a local optimum. I don't think they follow it literally, which raises the question: why phrase it that way? Probably because A: it gets attention, and B: it establishes the expectation that deviations will be "nonexistent" (read: rare), which keeps everyone on the same page in the debate instead of having to refight the performance fight on every patch.
Read literally, it really doesn't work, because performance is not a scalar value and there's no universally correct way to say that the performance of program X is better than that of program Y. Testing against benchmarks only mitigates that; it doesn't eliminate it, because yesterday the benchmark suite was one thing, today it's another, and tomorrow it'll be yet another, as it keeps changing.
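To make the "not a scalar" point concrete, here's a minimal sketch (the benchmark names and numbers are made up, not from any real suite): two builds can each win on different benchmarks, so neither one strictly "regresses" the other without someone first choosing a weighting.

    # Hypothetical benchmark timings in milliseconds (lower is better).
    build_a = {"page_load": 120, "js_raytrace": 450, "dom_mutation": 80}
    build_b = {"page_load": 140, "js_raytrace": 400, "dom_mutation": 85}

    def dominates(x, y):
        """True if x is at least as fast on every benchmark and strictly
        faster on at least one -- a Pareto comparison, not a total order."""
        assert x.keys() == y.keys()
        return all(x[k] <= y[k] for k in x) and any(x[k] < y[k] for k in x)

    print(dominates(build_a, build_b))  # False
    print(dominates(build_b, build_a))  # False
    # Neither build "is faster": the verdict depends on which benchmarks
    # the suite happens to contain today and how they're weighted.

So "no regressions" really means "no regressions on whatever the suite measures right now," which is a moving target.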
You've just set up a question with a yes/no answer, then, when given one answer, said "I don't believe it," and when given the other, smugly replied "I knew it!"
Why couldn't they block security patches that impact their benchmarks?
It's not like anyone real ever gets hit by zero-day "a carefully crafted PNG file fed into Flash version x.099.31 over an HTTPS connection to a site with an expired SSL certificate can gain access to cookies from domains with a Q in them" kinds of exploits.