The way to make a program faster is to never let it get slower. (webkit.org)
33 points by helium on March 5, 2009 | 18 comments



I don't understand how you can improve performance continuously while adding features, unless you optimize parts of the program unrelated to the new features...


They started with a working codebase on day one -- KHTML.

It's a lot easier to hold to this rule while polishing a working product than to break fresh ground.


Common excuses people give when they regress performance are, "But the new way is cleaner!" or "The new way is more correct." We don't care.

It's worth pointing out that increasing CPU speed will always make your software faster, but it will never make it more correct. Considering all of the WebKit browsers I've used seem to crash every half-hour, I think the WebKit team may want to rethink their software engineering strategy. Getting the wrong answer really fast is pretty much useless.


>Considering all of the WebKit browsers I've used seem to crash every half-hour

Funny. I've been using Chrome regularly for about 2 weeks now, and the only crashes I've had are when it tries to start eMusic's download app. I have one window open with a few tabs, and it's been open for about 2 days now.

Either something is wrong with your setup, or something is unusually great with mine.


My experience with Chrome and Safari is that they're pretty stable.


I think you're misunderstanding what they're trying to say. I think they're trying to escape the perfectionism that can stall a project and halt progress.

They have a very extensive testing framework that tests for correctness. Even if they didn't, if a browser isn't doing something correctly, people would notice right away. Their "we don't care" assumes that people working on the project are competent and will produce code that is good enough™.
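(To make that concrete: a correctness check in that style usually just renders a test page and diffs the serialized output against a checked-in expectation. A rough sketch in Python, with made-up file names rather than WebKit's actual harness:)

    # Minimal sketch of an expected-output correctness check: compare the
    # serialized render of a test page against a checked-in expectation.
    # The file paths below are hypothetical, not WebKit's real layout.
    import difflib
    import sys
    from pathlib import Path

    def matches_expectation(actual_path: str, expected_path: str) -> bool:
        actual = Path(actual_path).read_text().splitlines()
        expected = Path(expected_path).read_text().splitlines()
        if actual == expected:
            return True
        # Print a diff so the failure is obvious in the test log.
        print("\n".join(difflib.unified_diff(
            expected, actual,
            fromfile=expected_path, tofile=actual_path, lineterm="")))
        return False

    if __name__ == "__main__":
        ok = matches_expectation("out/table-layout-actual.txt",
                                 "expected/table-layout.txt")
        sys.exit(0 if ok else 1)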

Modern WebKit browsers tend to be extremely stable, enjoy a clean code base, and run very fast, so they're doing something right.


That's a cop-out. That's why, for the most part, my computer never seems to get faster: the apps are getting slower. I doubt they are getting "more correct". I wish more projects thought like WebKit and treated performance as one of their top priorities. In fact, the only things worth sacrificing performance for are (1) security and (2) stability. I haven't had any problems with WebKit browsers on Windows (not that I used Chrome for long), and the Linux stuff isn't a straight port, I think.


If you want some more insight into how the team works, this is a very good post as well: http://webkit.org/blog/174/scenes-from-an-acid-test/


You have to wonder what they do when a security fix slows WebKit down.


Declare that the previous code obviously wasn't functional and therefore the performance is invalid?

Following this literally is a recipe for getting stuck in a local optimum. I don't think they follow it literally, which raises the question: why phrase it that way? Probably because (a) it gets attention, and (b) it sets the expectation that deviations will be "nonexistent" (read: rare), which keeps everyone on the same page instead of refighting the performance fight on every patch.

Read literally, it really doesn't work, because performance is not a scalar value and there's no universally correct way to say that the performance of program X is better than that of program Y. Testing against benchmarks only mitigates that; it doesn't eliminate it, because yesterday the benchmark suite was one thing, today it is another, and tomorrow it will be yet another, as it gets changed.
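(To make the judgment call concrete: even a simple pre-commit gate has to pick which numbers count and how much noise to tolerate. A toy sketch, with invented benchmark names and an arbitrary 2% tolerance, nothing like WebKit's real infrastructure:)

    # Toy "never get slower" gate: compare per-benchmark timings of a patched
    # build against a baseline and flag anything slower beyond a noise margin.
    # Benchmark names, numbers, and the 2% tolerance are all invented.
    NOISE_TOLERANCE = 0.02  # 2%, to absorb run-to-run measurement noise

    def regressions(baseline, patched, tolerance=NOISE_TOLERANCE):
        """Return {name: (old_ms, new_ms)} for benchmarks that got slower."""
        slower = {}
        for name, old_ms in baseline.items():
            new_ms = patched[name]
            if new_ms > old_ms * (1 + tolerance):
                slower[name] = (old_ms, new_ms)
        return slower

    baseline = {"page_load_ms": 310.0, "js_bench_ms": 450.0, "dom_mutate_ms": 120.0}
    patched  = {"page_load_ms": 305.0, "js_bench_ms": 471.0, "dom_mutate_ms": 118.0}

    for name, (old, new) in regressions(baseline, patched).items():
        print(f"REGRESSION {name}: {old:.1f} ms -> {new:.1f} ms")
        # policy: the patch is rejected unless it ships with an offsetting win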


I think they do follow it literally. If you want to introduce a slowdown, the price is that you have to make something else faster at the same time.


So you're saying that the WebKit team would delay a security fix while hunting for a performance offset for it? I don't believe that.


Security fixes are probably not included. But a security fix isn't usually going to be the sort of thing that slows down a benchmark anyway.


I hope you're right!


Do you have evidence for this?


> Following this literally is a recipe for getting stuck in a local optimum.

Nope. It merely means that you don't leave a local optimum until you have something better.

In other words, there's no continuity requirement.


In other words, there is an excuse for a performance regression. I wonder how many others there are.


You've just set up a question with a yes/no answer; when given one answer you said "I don't believe it", and when given the other you smugly replied "I knew it!".

Why couldn't they block security patches that impact their benchmarks?

It's not like anyone real ever gets hit by zero day "a carefully crafted PNG file fed into Flash version x.099.31 over an HTTPS connection to a site with an expired SSL certificate can gain access to cookies from domains with a Q in them" kind of exploits.



