
> But why do web developers want navigation transitions? In order to emulate native apps, of course.

If we're doing unpopular opinion time, I think this is the real issue that no one wants to talk about. I'm all for whatever feature development the community can standardize. Apple, Google, Mozilla, etc. all have smart people working for them, and when they cooperate things seem to work out fine. The problem is trying to achieve the feel of native applications in a _non_-native application. Browsers should do one thing well, but that one thing has no real business being a duplication of Mathematica, Quake, or Visual Studio.

It's always interesting to see a Show HN where someone cleverly gets an approximation of one or another of those things running in the browser, but come on. Making the browser the operating system is simply going to push the same old incompatibility problems into the browser. It's not going to solve them.


I'm sorry, but how would making browsers the operating system cause the incompatibility problems to reappear? I think this massively ignores the realities:

- there's almost no native-feeling cross-platform UI library for native apps at the moment

- There's a standard for most things happening in a browser, which means that things look pixel-perfectly the same across a huge number of browsers. Differences among current browsers are more about performance and experimental features

So currently, if Google, Apple, and MSFT decided to jointly ban all native apps and only have a web browser on their smartphones, we would have a much better compatibility situation, since so much functionality has been standardised on that end, and _actually respected_.

The browser is basically what the JVM tried to be, a write once work anywhere solution that is pretty well sandboxed. It's miles better than pretty much every other cross-platform piece of tooling in existence.


> There's a standard for most things happening in a browser

There's POSIX, too. In practice, divergences are plentiful.

> Differences among current browsers are more about performance and experimental features

The fact that there are huge web tables meticulously documenting feature compatibility across browsers implies differently.

Besides, a lot of web standards are written after the fact. I don't see the situation as being any better than with POSIX. It might even be worse.


> There's POSIX

POSIX + a standard library for UI components + containerization + WebAssembly would be the "correct", cleanroom solution, in my opinion.

If you want to build and run "native but untrusted" applications on users' computers in heterogeneous environments, and you can't anticipate which APIs someone is going to need next, then give them access to a walled-off part of the computer.

Same-origin policy? Replace it with an L7 firewall (+ some DNS security manifests?).

No cookies/local storage? That just means spinning up a new container every time the user visits.

No audio playback? Don't map the sound device into the container.

Etc.
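
To make that concrete, here's a purely hypothetical manifest for such a scheme; none of these keys correspond to any real tool, it's just a sketch of an app declaring which host resources its container gets, with everything else simply not mapped in:

  // Hypothetical per-app manifest; every key here is invented for illustration.
  const appManifest = {
    image: 'example-app',        // the untrusted application bundle
    network: {
      policy: 'l7-firewall',     // same-origin policy replaced by an L7 firewall
      allowHosts: ['api.example.com'],
      dnsManifests: true,        // the "DNS security manifests" idea
    },
    persistent: false,           // no cookies/local storage: fresh container per visit
    devices: [],                 // sound device not mapped in, so no audio playback
  };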


Sounds like you're really poorly reinventing Inferno. I don't even understand why you would need WebAssembly in this hypothetical scheme.

Container namespacing isn't fine-grained enough, and much of this is the result of traditional Unix-likes having large disparities between different ways of naming things that cannot be composed. The Spring solution was to have all of these descend from a name service, and the Plan 9/Inferno solution was to have all resources be multiplexed into virtual file systems (i.e. one way of naming things).

Furthermore, the inclusion of containers here gives you a worst-of-both-worlds approach. You're not even detaching resource subsystems from a unit of execution's POV, you're cloning the entire OS namespace modulo disallowed subsystems for every unit of execution. That's a combinatorial explosion of the state space.

It also means complicated solutions to the problem of resource management and communication within and between containers, which is a problem when virtualization occurs at too high a level. In contrast, virtualization at the process/task level with OS-wide capabilities for single-system imaging and a heavily integrated subsystem for naming things (either a one-true approach or a prototype-based one) means you can configure a system like it's a whole unit without sacrificing security.

Then again, it's not like POSIX is even that great. Why should a process be associated with one uid? Why can't I dynamically attach and remove capability tokens from processes while they're running, as the Hurd does?


I'm not too fixated on POSIX + containers. It's just what I'm familiar with.

It's more about what the complex web applications are trying to do: run in an isolated context, run compiled code from whatever language the developer preferred, and utilize various low-level features in ways that people didn't predict in advance.

Basically, what's irritating is that the web standards committees are trying to hand-craft poor knockoffs of lower-level APIs, one at a time, in ways that are incompatible and don't interoperate well with existing native software. I mean, you can't even pipe data in or out of a tab if you wanted to.

Instead it might be better to look at already existing APIs that have been refined over years + some security namespacing.


I'm sure you can do better than the POSIX APIs, but I think it's hard to argue that POSIX APIs are better than the Web APIs. POSIX is a terrible API with decades of cruft.


Web APIs are incredibly restrictive, don't interoperate with native code at all, and are generally a tiny subset of what native APIs provide.

Personally, I'd rather put up with some cruft than not be able to do something at all.


POSIX is actually pretty horrible, if you look at the details. Guess what the "close" function does (hint: you can't use it to close a window or a sale). It requires the use of three-letter paths like /tmp.

(Relatedly: while Windows is just as bad in that it gets its own things wrong, at least those mistakes weren't tied to POSIX's filesystem layout.)


You're right, there's no native-feeling cross-platform UI library for native apps, and the browser isn't poised to be an exception to that. Apps made in the browser don't feel native. The closest to a native-feeling app built in a browser that I can think of is Atom, which feels like a more sluggish version of Sublime (a Qt app). Neither of them feels as native as truly native apps do on their native platforms. Not nearly as native as Gedit on GNOME, Kate on KDE, Notepad++ on Windows, or TextMate/BBEdit on Mac.

And they never will because the native feel comes from paying close attention to the design language and behavior of your app with respect to the host platform. If a cross platform app really paid close enough attention to these things to make a difference, they may as well completely rewrite their UI layer on all platforms.


>there's almost no native-feeling cross-platform UI library for native apps at the moment

wxWidgets is native, and it has been working for decades now. Maybe some people disagree with its use of macros, but it certainly works, runs fast, and is native on Windows, OS X, and Linux.

I repeat: it's not only native-feeling. It uses the native controls on each platform it supports. It is fully native.


It is more native than toolkits which try to recreate the controls themselves. But I would not call it "fully native", because it does not let you write cross-platform, fully native-feeling apps. It fails at that for the simple reason that doing so is impossible: different platforms have different UI conventions, including icon appearance, controls with no direct equivalents on other platforms (or with equivalents that have subtle differences in usage), idioms and standard layouts, behavior of keyboard shortcuts and gestures, use of platform-specific features, etc. If you do not spend a significant amount of time tuning your UI for each, you may end up with something that's 80% right, but it will never be 100% right.

Also, wxWidgets has had a lot of problems lately on OS X: high-DPI not being supported properly for years, broken focus, other bugs. It's not just the macros that make it an icky framework, which I say from unfortunate first-hand experience.


I'm curious about this. Does something like wxWidgets offer all widgets that are available in an OS?

Does it offer the file-listing widget the Mac has in that fourth mode (Cover Flow), where flipping between files shows an extra area on top that feels like flipping through music CDs?

I guess the point is, what do you do with widgets available in one OS that aren't available in another? Should UI libraries keep up?


It keeps up in some cases, as some dialog options are only available on one platform and are ignored on the rest.

But it is updated slowly, i.e. the platform has a new widget and the corresponding wxWidgets update happens several months later.

You can check, for example, Audacity on your platform and evaluate whether it feels native enough.


> There's a standard for most things happening in a browser, which means that things look pixel-perfectly the same across a huge number of browsers <snip> The browser is basically what the JVM tried to be, a write once work anywhere solution that is pretty well sandboxed. It's miles better than pretty much every other cross-platform piece of tooling in existence.

That's a good theory. My experience as a consumer of websites across Chrome, Firefox, Chrome on mobile, and Firefox on mobile leaves me with a perspective more aligned with reality than theory. The same pages vary widely across these platforms. It's not uncommon for form fields to be illegible on one or more of them; layout is different, readability is different.

Maybe once we get that little problem ironed out (whether it's people building websites badly or things not being as compatible as they should be, I don't know) we can talk about taking it further...


Agreed.

I feel a little twinge every time I hear someone say that something on the Web should be or is "pixel perfect". That phrase has a particular (and obvious) definition, one that is rarely correct when speaking of a web page that is viewed in more than one browser and/or on more than one machine.


> The browser is basically what the JVM tried to be, a write once work anywhere solution that is pretty well sandboxed. It's miles better than pretty much every other cross-platform piece of tooling in existence.

I'd say Flash is better than a browser. Flash seems to fall apart when embedded in a browser, but stand-alone it is decent. And its IDE/tooling is top-notch too.


>there's almost no native-feeling cross-platform UI library for native apps at the moment

Xamarin.Forms is the closest I can think of. Qt too, kind of.

>The browser is basically what the JVM tried to be, a write once work anywhere solution that is pretty well sandboxed. It's miles better than pretty much every other cross-platform piece of tooling in existence.

It's true, the browser succeeded as a works-everywhere JVM equivalent.

It also succeeded at reimplementing half of your OS features, being another massive source of security issues, being inefficient with resources (Chrome memory usage, lol), having piss-poor performance, and locking everything to Javascript. Compiling to Javascript is not a solution. A proper bytecode is. Which we may have with WebAssembly as soon as it's implemented and the standard is respected by browser builders (which will probably be around 2040). Until Google decides to add a new WA feature that's incompatible with other browsers, because that's what Google does. Let's not get started on the fact that HTML is a terrible language for UIs, and that any Javascript-based solution is not a solution at all (i.e. React is still crap, as is Meteor and javascript.framework.of.the.day.js).

The web as an application delivery medium is a failed experiment, patched up on all sides and held together with 40MB of polyfills on every page.

> if Google, Apple, and MSFT decided to jointly ban all native apps and only have a web browser on their smartphones, we would have a much better compatibility situation, since so much functionality has been standardised on that end, and _actually respected_.

1. We've been hoping to get at least TWO browser vendors to fully collaborate for the last 20 years and it has not happened. It won't happen.

2. Performance will still be crap.

3. I'd rather not get locked to Google's piss poor record of updating the stock Android browser. In fact, I'd rather not get locked to Google's piss poor software at all, thank you very much. The same argument applies to Apple and Microsoft. Safari is awful, and so is IE/Edge.


I get where you're coming from, and agree with a lot of what you say.

BUT, Fundamentally...

>The web as an application delivery medium is a failed experiment, patched up on all sides and held together with 40MB of polyfills on every page.

Is just not a true statement. Or rather, the web sucks, but it is the only cross platform UI experiment that has 'worked' on any significant level.


> the only cross platform UI experiment that has 'worked' on any significant level.

Users do not care about cross-platform compatibility, at all. They only care that it works on their platform. As developers, we should be cheering for a diversity of widely-popular, mutually-incompatible platforms because there will be more work for developers to port the iOS version to Android to Windows, etc.

The only people who should be upset about cross-platform compatibility issues are budget-conscious managers and unfortunate OSS devs.


>As developers, we should be cheering for a diversity of widely-popular, mutually-incompatible platforms because there will be more work for developers to port the iOS version to Android to Windows, etc.

This is like saying "As construction workers we should be cheering for natural disasters, because there will be more work for construction workers to rebuild destroyed cities."

Job security is great, but at what cost? I'd rather see developers making completely new things than wasting time porting from one native platform to another.


And honestly, whether as a construction worker or a programmer, I, too, would rather be building new things than rebuilding the same goddamn thing over and over again.


What a hyperbolic analogy. No, porting to new platforms is not like recovering from a natural disaster, and rooting for competing platforms is not like cheering for the misery of a disaster.

And for the record, doing a port well (as opposed to a hacky, broken one) frequently requires lots of creativity and technical ingenuity.


> Users do not care about cross-platform compatibility, at all.

I've heard that some users have both a phone and a laptop.


No one has a laptop anymore. We're in the "post PC" era, haven't you heard?


Well, Java has succeeded, too. Look at Minecraft.


Minecraft has no meaningful UI; the entire window is just an OpenGL canvas.


The web has no meaningful UI. The entire window is just styled elements.

UI is not always about respecting the OS's guidelines/theme. Firefox, for example, has pretty much ignored everything since its Australis UI update, and it is fine.


Look at Google Chrome. You can’t even use any <button> or <input> that looks native in Chrome – it’s just as native as the old Swing styles.


Given that every other version of applications/OSes changes the definition of what "native" looks like... I don't know that it's a huge problem...

Beyond that is the fact that Swing was fugly everywhere, while Bootstrap looks decent everywhere.


Well, JavaFX’s styles look beautiful everywhere, too. (And it even uses decent XML and JS for styling and interaction.)

I was just arguing against the parent poster, who said Java was different from browsers because you could not make a native UI.

Which you actually can very easily.


That was a fluke, and even Notch knows it.


Chrome has been the standard Android browser since KitKat, and it auto-updates since Lollipop. Ditto the system webview.


It's such a shame that there are still new Android 4.x devices being produced...


Isn't 4.x KitKat? It's the 3.x devices that cause issues, and those are preeeeetty old.


4.4 is KitKat. 4.0 is ICS, 4.1-4.3 are Jellybean releases.

For native apps, anything Jellybean or better is reasonably easy to deal with (and ICS isn't that bad). But for webapps, the system webview was the horrible default Android browser up until KitKat, and didn't get auto-update until Lollipop. Android Browser has a number of outright bugs in its spec compliance (when I was developing for Google Search, we thought of it as the new IE6), and you lose any sort of GPU acceleration or fast JS engine that you might want for animations.


> Let's not get started on the fact that HTML is a terrible language for UIs

This is why I'm hoping that future web front-end frameworks will output UI elements to a <canvas> instead of the DOM.

If a DOM-like scene graph is needed, it should be managed entirely by the framework and simply be an intermediate layer between the application code and the <canvas>.
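
For concreteness, here's a minimal sketch of what the bottom layer of such a canvas-targeting framework might look like; every name here is hypothetical, and a real framework would hide all of this behind its own scene graph:

  // A "button" rendered straight to a <canvas id="app">, with manual hit-testing.
  const canvas = document.getElementById('app');
  const ctx = canvas.getContext('2d');

  const button = { x: 20, y: 20, w: 120, h: 32, label: 'Click me' };

  function draw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = '#ddd';
    ctx.fillRect(button.x, button.y, button.w, button.h);
    ctx.fillStyle = '#000';
    ctx.font = '14px sans-serif';
    ctx.fillText(button.label, button.x + 12, button.y + 21);
  }

  canvas.addEventListener('click', (e) => {
    const r = canvas.getBoundingClientRect();
    const x = e.clientX - r.left, y = e.clientY - r.top;
    if (x >= button.x && x <= button.x + button.w &&
        y >= button.y && y <= button.y + button.h) {
      console.log('button clicked');
    }
  });

  draw();

Note that nothing in this sketch exposes any text or semantics outside the bitmap, which is exactly the concern raised below.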


As a blind user and developer I am absolutely terrified by such a prospect. We've just started to get decent accessibility for the plethora of new web features with ARIA and similar. But if you render everything to graphics output directly to a canvas you lose any capability of ever talking to my screen reader. Text is awesome for both people and machines, let's not throw it out.


Hey, at least we'll get a bunch more mostly-pointless JavaScript features to support accessibility on canvases!


I think this is shortsighted. In 5-10 years browsers will be more Virtual Machine than Web Browser. You can almost argue that they already are.

As browsers and phones get more and more performant, the idea of emulating native apps in the browser won't have the performance stigma attached to it.

This same thing played out with Java et al. It was dismissed as non-performant compared to C/C++ (aka native code) because it ran inside a heavy VM, but nowadays the performance is just fine for an enormous number of applications, because the virtual machines and the hardware got faster.


The JVM itself got faster -- and in many places, relies on native code for particularly expensive functionality -- but it's still unusable for end-user applications because:

1) The long load time / HotSpot JIT overhead when starting applications.

2) The large memory overhead of the Sun/Oracle GC's reserved heap.

To some degree, Dalvik solved some of these issues, but has also retained the (very necessary) escape-hatch-to-native JNI.

JavaScript is SO MUCH HARDER to optimize than JVM bytecode. Whereas you can feasibly AOT-compile JVM code, JavaScript's dynamism makes this all but impossible.
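
To illustrate the kind of dynamism meant here (a made-up example, not from the thread), even a trivial property access can't be compiled to a fixed field offset ahead of time, because object shapes can change under the compiler's feet:

  function getX(p) { return p.x; }

  getX({ x: 1 });           // call site sees shape {x}
  getX({ x: 1, y: 2 });     // now it also sees shape {x, y}: polymorphic

  const p = { x: 1 };
  delete p.x;               // shapes can even lose fields at runtime...
  p.x = 3;                  // ...and regain them
  getX(p);                  // the engine must re-check the shape every time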

On top of that, JS is welded to the DOM and CSS, the combination of which incurs huge rendering pipeline CPU and memory overhead for anyone doing anything as complicated as a "native" UI view hierarchy.

Rather than enumerate all the failures, I'll note that it's not all doom and gloom. We've got a web bytecode in the works that's much saner than JS. Canvas and WebGL exist.

Progress on those fronts may create a situation like you're describing, but -- and this is no great loss in my book -- it won't be the web.


Minecraft alone shows your first two points are not valid. Write a hello world app in Java: the startup time is milliseconds, and the memory use is not humongous.


(non-casual) Games aren't generally known for fast load time, sharing resources on a multitasking system, or short playing sessions -- all of which plays to the JVM's strengths (or avoids its weaknesses).

As for fast load times: just launching my (JVM-based) build tool (sbt) takes 7 seconds on my laptop.

Running 'Hello World' from an uncompressed class file takes 0.12s, and most software is quite a bit larger than a couple lines of bytecode in an single uncompressed class file.

The C version, by comparison, completes in 0.005s.


Yes, recently, Java actually became usable for a lot of seemingly native apps.

And it does this a lot better than the browser.


> [The JVM is] still unusable for end-user applications because [it makes application startup slow and uses too much RAM.]

I can't agree.

The poster child for "bloated, overweight, slow-as-shit Java application" is Eclipse. Circa 2013, I found it to be not substantially slower or any less usable than Visual Studio on the exact same hardware.


That's implying that Visual Studio isn't also bloated, overweight, and slow-as-shit - three traits which VS serves as the textbook definition of. It makes Emacs look like ed in comparison, yet without offering any actual improvement in functionality.

In other words, "but it's on par with Visual Studio" does absolutely nothing to counter the argument that the JVM is unusable for end-user applications; if anything, it proves it.


After several years of working with both VS and Eclipse, I can say that IDEs are slow because they're doing a lot of work. (Eclipse suffers a bit because the Eclipse Steering Committee[0] won't let go of the dream that is the Eclipse Platform, but that dream primarily damages the ease of writing software for Eclipse, rather than Eclipse's execution speed.[1])

Consider two applications that each solve tasks of comparable complexity:

One of these applications is written in C++. The other is written in Java. Both are written by programmers skilled in their respective languages.

If both of these programs have roughly equivalent performance, what does that say about each underlying language?

[0] Or whatever their official name is.

[1] Source: In a former life, I was tasked to write and maintain bespoke software written on top of the Eclipse Platform. I cannot recommend the Platform for any new development that isn't writing software for Eclipse, itself.


>It makes Emacs look like ed in comparison, yet without offering any actual improvement in functionality.

Well, it does offer the functionality of being a usable integrated environment, which I know a lot of the emacs/vi people dismiss, but it's a very real thing despite those dismissals.

On the other hand, I don't really care much, since I've never had the emacs/vi user disease of insisting that everyone else acknowledge that my opinions are superior.


> Well, it does offer the functionality of being usable integrated environment, which I know a lot of the emacs/vi people dismiss

Maybe the vi people do, but I'm not sure about the Emacs people, seeing as SLIME is the modern day free integrated environment for Common Lisp programming.

Regardless, my point was less about the superiority of Emacs and more that Emacs is the oft-cited textbook example of a "bloated" program (as per the "Emacs is an excellent operating system; too bad it's missing a good editor" joke), yet is dwarfed in size by the likes of VS and Eclipse while having parity or near-parity functionality-wise.


The "Emacs is bloated" meme stopped being relevant at least fifteen years ago. ;)

But seriously; some serious questions, seeing as how I've never really used Emacs:

How does Emacs compare to versions of Eclipse CDT or Visual Studio released in the past four years for C++ development? How well does Emacs's autocomplete [0] work for templated types of varying complexity? How about its ability to navigate to method definitions and implementations of non-obvious types? Does its autocomplete database update quickly and automatically, or must one trigger the update oneself?

[0] I figure that C++ development support isn't in mainline Emacs and is provided by extensions. For brevity's sake, I'll continue to improperly attribute this functionality.


In fairness, Visual Studio (and most IDEs) are pretty bloated and slow.

As to the grandparent, that is what asm.js was for: a subset of JS that could be heavily optimized and JITed... I also really prefer .NET's method of native invocation over JNI stubs.


After several years of working with VS, Eclipse, and a couple of other developers' IDEs, I can say (with a straight face) that developers' IDEs are slow primarily because they do a lot of work.


Absolutely... and if you're using one on a system without an SSD, it's outright painful. There's a lot that goes on in IDEs: effectively on-demand compilation, file watching, and more.

Of course, sometimes I find I'm actually more effective when I can just work on a small module in a plain text editor, and keep things organized so it's relatively easy to follow.


Oh, so a dumb terminal. Ok. It's 1980 again.


1. Dumb terminals never really left us. They are in active use all over.

2. The current web trend is using more of the browser CPU, as opposed to the server CPU. Heck, I have entire apps that don't call the server after load at all. I don't remember being able to execute code on a dumb terminal (possibly you could... I just don't remember it).

Dang it, now I'm nostalgic for playing MUDs on a VT1000 in the 90s.


If it can do everything the user wants then what is the issue?

Just because it's been done before it's the wrong answer?


My reply was a bit ambiguous and snarky.

I don't believe it's the wrong answer if it works. But people are commenting about moving the web forward to run apps in browsers and all and so there are parallels with what we did in the 80's. That's really what I was getting at. Nothing in computing is ever really new.


We're replacing a very nice custom SCO Unix backend developed at my work from the 1980s onward (it drops users into a custom shell, multiple programs execute in the user's context, and it leverages all the nice features of *nix) with a purchased desktop Windows ERP application, and it makes me so sad.

Literally, we will move from having one or two admins who can manage 300 users, decades of automation, and totally instantaneous program execution (in C) under heavy user load on a single server, to distributed, client-based programs running on separate machines that all have to be patched, updated, troubleshot, etc. (plus the 5-person support team that has to go with it), communicating with a central program server and separate DB servers, etc., etc., all because the console application is "too old looking".


A web-based application could have a very similar backend, and you could run pretty much any modern OS client on the front end...

Though I think the development and training for your new front end will be a bit cumbersome no matter what you use.


You might be interested in WebAssembly, which is in the works, if you haven't heard of it already.

https://blog.mozilla.org/luke/2015/06/17/webassembly/


But the only languages available are HTML, CSS, and JavaScript, which are just bad for application development, for so many reasons.

I would rather see something like what Ubuntu Touch is trying, where they write the frontend in QML and can have backends in C++ for speed.


> But the only languages available are HTML, CSS, and JavaScript...

Eh, there are the canvas and WebAsm. They make for a full-blown VM, if you don't mind shipping a several-MB interpreter with your page.

Next step should be installing the interpreter on the browser, for everybody to use... Then we can create Flash all over again.

(Now, if somebody gets a way to do that full VM thing in a way that is compatible with the DOM, then we'll have some improvement.)
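
For reference, the JS side of that canvas + WebAssembly combination ends up looking roughly like this (using the streaming API browsers eventually shipped; the file name and the main export are hypothetical):

  // Load a compiled-to-WebAssembly module and call into it.
  WebAssembly.instantiateStreaming(fetch('app.wasm'), {
    env: { /* host functions the module imports would go here */ },
  }).then(({ instance }) => {
    instance.exports.main();  // assumes the module exports a main()
  });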


>"Web ASM [...] installing the interpreter on the browser, for everybody to use... Then we can create Flash all over again."

That's ridiculous. Is Javascript "like flash" because most browsers ship a several MB interpreter for it?


Javascript creating random elements on the canvas? Yep, just like Flash.


> But the only languages available are HTML, CSS, and JavaScript, which are just bad for application development, for so many reasons.

I would have completely agreed with that a few months ago. HTML/CSS/JS is fine for most websites, which are primarily text, images, and a couple of buttons, but it breaks down quickly when you try to build an application. Then React was invented. I don't know if React will be the thing people use to build web applications in the future, but I think the concepts it introduced will be. The reason is that React provides a foundation that makes it feel more like building a desktop UI or a video game than building a traditional website.

Performance will improve over time with faster hardware, faster networks, and software optimizations. JavaScript's V8 is already one of the fastest scripting-language engines, especially for string handling.
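
As a sketch of that "desktop UI" feel: in React, the UI is just a function of state; plain createElement calls, no build step (this uses the hooks API and createRoot, which postdate this thread):

  const e = React.createElement;

  // A self-contained component: state in, rendered tree out.
  function Counter() {
    const [count, setCount] = React.useState(0);
    return e('button',
      { onClick: () => setCount(count + 1) },
      'Clicked ' + count + ' times');
  }

  // Mount into a hypothetical <div id="root">.
  ReactDOM.createRoot(document.getElementById('root')).render(e(Counter));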


But that is just fiddling with the symptoms, not doing anything about the root of the problem. It is the same with web components; they're building on such shaky ground.

I like the ideas around https://www.youtube.com/watch?v=6UTWAEJlhww instead.


Based on the video, is security the root problem? Sorry, but that is a problem in every programming environment. Security is hard regardless of whether you use C++, JS, Linux, Windows, etc. Therefore it is not terribly relevant to the question: is JS capable of emulating native functionality, and should it?

For that question, the root problem was definitely JS itself historically, but the language deficiencies have largely been resolved by ES6. In my opinion, the only major remaining deficiency is handling int64 and larger numbers, which will hopefully be resolved by "value types" in ES7. Then use a React-like framework on top, and building applications feels "right", at least to me.
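
The int64 problem is easy to demonstrate: JS numbers are IEEE-754 doubles, so integers above 2^53 silently lose precision. (For what it's worth, this was eventually addressed by BigInt in ES2020 rather than by ES7 value types.)

  console.log(9007199254740992 === 9007199254740993);   // true (!) both round to 2^53
  console.log(9007199254740992n === 9007199254740993n); // false: BigInt is exact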


I'm not quite sure you watched the whole video; he says let's get rid of everything (HTTP, HTML, CSS) but JavaScript.


It will probably be more like a URL-link-aware VNC viewer. That has some interesting implications for the back end WRT emulation.


I think he's wrong in his assumption that navigation transitions are being used just to emulate native apps. They are being used because they provide a usability benefit. A navigation transition can demonstrate a lot to a user about hierarchy and flow [1]

[1] https://www.google.co.uk/design/spec/animation/meaningful-tr...


Back before the PC, didn't IBM and some other mainframe companies try to lease out terminals? As in, you could get a terminal on your home desk that, via dedicated line or basic dialup, would connect to a mainframe, where you paid by the minute or something?

In a sense the web browser, and SaaS, has become a reinvention of that...


Yes: https://en.wikipedia.org/wiki/Time-sharing

A popular (for the time) example of this type of service was CompuServe (https://en.wikipedia.org/wiki/CompuServe). I had my first online experience connecting to CompuServe on my family's home PC, back in 1985.


They still do; it's big business. Any data-dependent system: Reuters / Bloomberg / Datastream. Rented-out terminals of a centralized system.


Ah yes, Bloomberg. Recently heard about those terminals in relation to someone using their service as something of a luxury Craigslist...


With data dependent systems, I assume you pay for access to the data, not just for the ability to use a computer.


> But why do web developers want navigation transitions? In order to emulate native apps, of course.

This is just a misunderstanding of good UI. Emulating native apps is NOT inherently "good". There are terrible native apps and great ones. Navigation transitions would be nice as another arrow in our quiver. Using them wisely is another can of worms.


Browsers should have one built-in navigation transition. When you click on a link that leaves the page, something should happen immediately. It may take seconds for the new page to load, especially on cell networks. Without immediate user feedback, people click repeatedly, which restarts the page load. So on page exit, something visible should happen. Dim out the page, zoom it down to a tiny rectangle, swoop it off screen and replace it with a busy icon, or something.

(Unfortunately, you can't implement this with a Firefox add-on, short of patching every link in the DOM, or I would.)
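
For the sake of illustration, here's a rough page-side sketch of that immediate exit feedback, the "patch the page from inside" approach rather than a browser feature; a real add-on would need much more care:

  // Dim the page as soon as a same-tab link navigation starts.
  document.addEventListener('click', (e) => {
    if (!(e.target instanceof Element)) return;
    const link = e.target.closest('a[href]');
    // Skip new-tab links and modified clicks, which don't leave this page.
    if (!link || link.target === '_blank' || e.ctrlKey || e.metaKey) return;
    document.body.style.transition = 'opacity 0.2s';
    document.body.style.opacity = '0.4';
  });

  // Undo the dim if the page is restored from the back/forward cache.
  window.addEventListener('pageshow', () => {
    document.body.style.opacity = '';
  });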


> When you click on a link that leaves the page, something should happen immediately.

On every browser I use, the UI changes immediately after clicking a new link.

Pay attention the next time you click a link that's not in cache. Look for messages in the status bar (which is usually hidden, but typically becomes visible on resource load or when hovering over a link), changes of favicons to spinners in tabs, and the change of the reload graphic to a "stop" graphic.

All of these UI changes are obvious to me.

I get that some people are impatient, but these same impatient people likely know how to and are licensed to operate a motor vehicle. Motor vehicle operation is a task that requires constant attention and much finer attention to detail than is required to notice the current browser UI cues that indicate that a new page is on the way.


Here's Firefox, immediately after entering a site name into the URL window.[1] The only visible effects are that the tab name has changed to "Connecting" and the reload arrow has changed to an "x". On Firefox Mobile, you don't get either of those, just a one-pixel-high progress bar at the top of the screen. It's worse on mobile, where things are slower and there are fewer auxiliary GUI items on screen.

At this point, you've only partially left the page. Some clickable items on the page being exited will still work. It's not entirely clear exactly when event processing for the old page stops.

[1] http://s3.postimg.org/53784icoj/afterlink.png


Those are things that happen to the UI immediately after asking it to navigate to a site that you typed in.

Now that you've demonstrated the UI changes that happen immediately after you type in a URL and press Enter, what UI changes happen when you click a link that leaves the page?

That's the complaint of yours that I was addressing in my comment. :)


I don't associate "navigation transitions" with native apps. What native apps have navigation transitions? I think the iPod really popularized sliding screens and the like, and they spread to mobile.

The feature for web browsers makes sense as progressive enhancement. Instead of using a "tool" (library), you just say "use this transition" and if the browser supports it, it does.


If you read further they link to a previous post that covers just that: http://www.quirksmode.org/blog/archives/2015/05/web_vs_nativ...



