
Different graphics formats have different strengths: one wouldn't use PNG for a portrait or JPEG for a block of text.

So I look at PDE and notice that its few points and colour contours bear a lot of similarity to a basic vector format. I wonder if a simplified vector format that utilises tracing combined with blurring effects would best PDE.


The "PDE" approach is working by effectively setting some boundary conditions and solving a differential equation. The smoothness of the contours comes from the fact that they are generated implicity by a relativity simple set of parameters. A typical vector format would instead record each contour explicitly, which, even after smoothing, is a lot of data.

The big idea is to describe the image intensities as a continuous surface which can be approximated in 3+ dimensions, not a patchwork of vector areas.
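
To make that concrete, here's a minimal sketch of homogeneous diffusion (my own illustration, not the paper's actual method): clamp a handful of known pixels as boundary conditions and let every other pixel relax toward the average of its neighbours, i.e. Jacobi iteration on Laplace's equation.

    import numpy as np

    def diffuse(image, known, iters=5000):
        # Known pixels stay clamped; unknown pixels relax toward the
        # average of their four neighbours (a discrete Laplace equation).
        u = np.where(known, image, image[known].mean())
        for _ in range(iters):
            # np.roll wraps at the edges -- good enough for a sketch.
            avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
            u = np.where(known, image, avg)  # re-impose boundary values
        return u

    # Toy example: rebuild a 64x64 ramp from just its four corner samples.
    truth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
    known = np.zeros(truth.shape, dtype=bool)
    known[[0, 0, -1, -1], [0, -1, 0, -1]] = True
    print(np.abs(diffuse(truth, known) - truth).max())

The stored data is only the clamped pixels and their positions; the smooth gradients fall out of the solver rather than being recorded explicitly.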


For something that already looks like it should be drawn with vectors, maybe. One thing to remember is that if an image is made up of points, those points can be put in order and each encoded as a single-number pixel offset from the previous point.
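
A toy sketch of that encoding (the helper names, width and points are my own, not from any real format):

    def delta_encode(points, width):
        # Flatten (x, y) to scan-order indices, sort, and store each
        # point as the gap from the previous one; clustered points give
        # small gaps, which a variable-length integer code packs tightly.
        idx = sorted(y * width + x for x, y in points)
        return [idx[0]] + [b - a for a, b in zip(idx, idx[1:])]

    def delta_decode(gaps, width):
        # Inverse: running sum of the gaps back to (x, y), in scan order.
        points, pos = [], 0
        for gap in gaps:
            pos += gap
            points.append((pos % width, pos // width))
        return points

    pts = [(3, 0), (5, 0), (4, 1)]
    print(delta_encode(pts, width=8))                # [3, 2, 7]
    assert delta_decode(delta_encode(pts, 8), 8) == pts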

For a vector format to encode a sky gradient, you would need a shape with a specific falloff that blends into another shape and falloff to create the gradient.


I think they just want something to complain about and aren't actually on Mac OS. I've already seen a number of examples which are simply fairy tales (such as claiming they're being prompted when opening ordinary JPG files).

That said, even the author notes that this is designed to be forked. It's not a script to run blindly on one's own machine; the idea is to exclude what is not to your taste while demonstrating features which can't be changed inside the GUI. (It's also very helpful when setting up multiple accounts.)


yep definitely hell banned :)


Probably worth noting, since a few stories have left this out: the phones labelled "Sony" aren't actually Sony designs or devices. They're fully Apple-designed, most likely working from a brief of "what would Sony do here?"

It's being brought to court as evidence because it could be used to indicate that Apple appropriated design inspiration from Sony (such as the Clié).

It's interesting to see this being brought up as evidence, because if you were to remove the Sony logo the design wouldn't actually convey much of a Sony 'feel' at all.


Contrary to the tweets, I think it's perfectly fine to be upset about Sparrow no longer being developed.

The days of software being unchanging are gone; that's a pre-internet way of thinking about computing, and it's perfectly fine to be upset that something you rely upon is no longer supporting you into the future.

We live in an era where there is an expectation that our software titles will keep pace with the rapidly changing nature of technology. The business model of iOS is simple: the vendor takes 30%, while the developer is free to lure new customers with additional innovations/features without having to worry about the expense of updating everyone who has already purchased the title. It's the lifeblood of competition and a business model that many titles adhere to.

So the developer is always updating the code for new devices, adding features relevant to emerging technologies, or simply staying current by supporting the latest standards. We treat software as a journey, not a static point. Software titles compete by out-innovating each other. The moment this stops, the software title is dead: its competitors overtake it quickly, and rarely would any of us rely on a piece of software that is no longer being developed.


In most cases software or firmware patching is sufficient, even in the case of some heat issues. If it cannot be fixed via a patch, or is only affecting a small batch of users, then returns are a better approach than distributing firmware.

If it's widespread then Google didn't do enough intervention testing (QA processes), which wouldn't be unusual, since even Mattel is guilty of letting intervention testing slide. (It slows production and pushes up costs.)


If it's an overflow/waterfall-type issue (for lack of a better descriptive term), then turning the screen off and on will fix it temporarily until the trigger (such as heat) sets it off again.

It's less likely a calibration issue because the rest of the screen is performing accurately, and the behaviour is inconsistent.


If it's heat from the hardware, that can still be fixed with either more optimised drivers, or outright capping performance.


Quality of membership (i.e. actual use) is more important than raw membership numbers. Those who seek out LinkedIn/Facebook/others are likely to be more interested in using the service than someone who joined G+ because their default search engine asked them to enter their Google services password to instantly become a member.

A comparable example is Apple's Ping. Apple doesn't tout Ping as a success because it has millions of members (who were similarly presented with a trivial sign-up method in iTunes 10; Ping registered over 1M users in the first 48 hours). Ping, however, is rarely used by the bulk of its members; as such, it's being discontinued. Most people signed up just to see what it was like, and they didn't stick around. Facebook/LinkedIn don't have this problem so much: most people already know what those services are for, so they don't need to create a profile just to see what they're like.

G+ is growing, but it's an overstatement to say that it has the same level of user engagement as Facebook/LinkedIn/others.

Many continue with the faulty logic that because G+ is a superior experience/platform it should become the dominant service; they need to revisit the lesson of Betamax: technical superiority does not equate to automatic or guaranteed success.


I agree; designing in pixels has always been the wrong way to go about it. It is only needed when there is a restriction in the number of pixels available to work with (such as displays that have large pixels).

The beauty of high pixel density screens is that we can literally output our vectors/high resolution rasters at the output size and not have to worry about massaging individual pixels for the best clarity. The advent of high pixel density screens allows us to think exclusively in terms of the end size on screen, instead of being bogged down with pixel dimensions. This high density makes design easier, not harder. Also, the concept of different pixel densities is not new; screens have always had different pixel densities.
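
In practice that just means specifying sizes in physical units and letting the pixel count fall out per device. A trivial sketch (the densities are real device figures; the half-inch button is an invented example):

    def to_pixels(inches, ppi):
        # Physical size -> device pixels at a given density.
        return round(inches * ppi)

    # The same half-inch button rendered on screens of different density.
    for name, ppi in [("96 ppi desktop", 96),
                      ("Retina MacBook Pro", 220),
                      ("iPhone 4", 326)]:
        print(name, to_pixels(0.5, ppi), "px")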

The reliable thing about 'retina' screens is that we can think of the pixel problem as 'solved' and just prepare artwork to pass the retina test, instead of trying to match it perfectly for every higher pixel density out there (a level of accuracy that won't be easily seen by the user). The same thing is done in print every day: 300 dpi, 600 dpi, 800 dpi, it doesn't matter; past a certain point the end user isn't going to casually notice the extra detail.

Designers who have been working in print will see the analogy to the various print device resolutions, each measured in lines per inch. Again the approach is to think in terms of the final dimensions, and not get bogged down with individual device resolutions.

I understand that designing for 1x can be problematic, but it's the same workflow as designing for any foreign pixel ratio/pixel density (such as the non-square pixels used in certain types of film). It's just another step in the workflow, and testing often on a device is a useful way for the designer to get the hang of it.


I wonder if the author is puzzled by the sudden influx of traffic after 6 months.

There is more interesting discussion in the HN thread from ~7 months ago: http://news.ycombinator.com/item?id=3376620

