Hm, hiding the protocol and the host part of the address – what next, hiding the domain? Then again, the path is already hidden, so what's left?
Seriously speaking: If browsers are simplifying the URI scheme for the alleged benefit of users, how do we expect these users to know anything about addresses? Isn't this undermining security rather than enhancing it? Highlighting significant parts may be preferable to hiding those deemed insignificant. Moreover, regarding https, I personally prefer positive affirmation over a lack of warning.
For me, desktop browsers handled this best with the padlock icon (and, before this, the key icon) shown together with a display of link targets in a status bar – a separate, reserved area. (While allowing pages to overwrite `window.status` was certainly not a good idea.) A consistent display of the authority issuing the certificate of the current page in a status bar like this might also be nice. I'm not convinced that less, but more opaque, information is the way to go.
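(For context, the status-bar spoofing alluded to above was trivial – a sketch of the old trick, with made-up addresses and a hypothetical link element; modern browsers ignore these writes precisely because of this:)

```typescript
// The old status-bar spoof: show a trustworthy-looking target on hover
// for a link that actually points somewhere else.
const link = document.querySelector<HTMLAnchorElement>("a.phish")!; // hypothetical element
link.href = "https://evil.example/login";
link.addEventListener("mouseover", () => {
  window.status = "https://www.yourbank.example/login"; // the lie
});
link.addEventListener("mouseout", () => {
  window.status = ""; // restore
});
```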
Dedicating 20 vertical pixels of virtual real estate to security-relevant information may be worth it. It may also be easier to parse than an overloaded omnibox/location/search/navigation/security/menu bar. Cutting down information that is displayed too densely, right from the beginning, won't help the issue. How many bits of information are there in this "everything bar"? Yes, there's still a bit of grouping left, mainly by spacing, but color is mostly gone as a signal in order to make the information density bearable. So users will apply quite an amount of selectivity when parsing this display, thereby inevitably missing relevant information. (That this densely combined display is kept rather homogenous, both for esthetics and acceptance, just aggravates the need for selective parsing, which is likely to become a habit.) "We'll pre-filter this for you" isn't addressing the problem, it's rather "living with the outcome".
Edit: One legitimate reason for redacting the host name is excessively long names, crafted to exceed the space available in the location display in order to deceive users about the identity of the host. Here, abbreviating with an ellipsis (compare `text-overflow: ellipsis`) so that the domain is still fully displayed may be a way to go.
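A minimal sketch of that idea (my illustration; the naive last-two-labels guess stands in for a real Public Suffix List lookup):

```typescript
// Shorten an over-long host name for display while always keeping the
// registrable domain visible. Naively treats the last two labels as the
// registrable domain - a real implementation would consult the Public
// Suffix List (think .co.uk).
function displayHost(host: string, maxLen: number): string {
  if (host.length <= maxLen) return host;
  const labels = host.split(".");
  const registrable = labels.slice(-2).join(".");
  const budget = Math.max(maxLen - registrable.length - 2, 0); // room for "…."
  const prefix = labels.slice(0, -2).join(".").slice(0, budget);
  return `${prefix}….${registrable}`;
}

// A deceptive, padded host keeps its true registrable domain on screen:
// displayHost("secure.paypal.com.account-verify-padding.attacker.example", 40)
// → "secure.paypal.com.acco….attacker.example"
```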
--
P.S.: What's the general lesson taught by such redactions by the browser vendor? That it is OK to ignore these things, as they are truly irrelevant? (Must be without significance, since Google told me so?)
> Hm, hiding the protocol and the host part of the address – what next, hiding the domain? Then again, the path is already hidden, so what's left?
You're right! Google and Cloudflare are already jointly destroying the meaning of URLs through their "AMP real URLs" specification, which allows any AMP gateway to impersonate a host, guaranteeing authenticity of the content through public-key cryptography.
These changes to the fundamental bricks of the web are not without consequences. They are deliberate attacks on a free and neutral network of equal peers.
> which allows any AMP gateway to impersonate a host
Or, from another perspective, they’re pushing for a URL shown in a navigation bar to represent the origin of a document, rather than where you obtained it from. (Which is already true of SOCKS and HTTP CONNECT proxies, of course.)
Personally, I’m for that change, because if it becomes a predictable part of clients, formats will emerge to attach signed origins to more document types (e.g. images; PDFs), and then people won’t be able to so easily lose the source of these documents. Imagine Wikipedia being able to automatically cite the “original origin” of anything you link!
Imagination is one thing. It's a moot point as long as it isn't actually a feature, while browsers like Chrome and sites like AMP gateways actively hide the "original origin" you speak of.
> Or, from another perspective, they’re pushing for a URL shown in a navigation bar to represent the origin of a document, rather than where you obtained it from. (Which is already true of SOCKS and HTTP CONNECT proxies, of course.)
Very good points, but the devil lies in the details. Preserving the authenticity of web content through public-key cryptography is of course a welcome change. But HTTPS is for encryption from the client to the server (Transport Layer Security). So browser UIs were designed so that a lock next to a URL meant you were securely connected to the host of the URL. That's what we've been teaching people browsing the web for more than a decade.
Subverting the meaning of URL bars and HTTPS indicators to make Google's control over the web less visible to the end user is of course a very politically-motivated choice and is yet another direct attack on the web as we know it to promote corporate interests. I would be all in favor of adding cryptographically-enforced authorship information in browser UIs, but would rather add a clickable "person" icon next to the HTTPS lock.
So in the end you would end up with visual indication that a secure connection was established to the Google AMP gateway (or whatever website) and that this website is sharing content that provably originated from somewhere/someone; you would just have to click the button to figure it out (just like for TLS connection details).
Also, if authentication of content is a concern or interest of yours, maybe content-addressable systems such as IPFS/DAT (among others) could interest you.
> formats will emerge to attach signed origins to more document types (e.g. images; PDFs), and then people won’t be able to so easily lose the source of these documents.
There are already standards for doing precisely this on the web. ActivityStreams with Linked Data Signatures comes to mind, for example. But these corporations have no interest whatsoever in building an open and federated web.
We're talking about Google. The company that, for example (among many other scandals), stopped federating their XMPP server (GTalk). At the flip of a switch, they consciously chose to prevent millions of people who were previously chatting just fine from doing just that. If that's not a manifestation of evil corporate power, I don't know what is.
Federation is nice, but going out of your way to hide from users that federation is occurring is a bit too much. Doesn't it matter who gets to see the full URL you are accessing (especially after the efforts on DNS over HTTPS)? At least the browser could put an icon in the URL bar that can be clicked to show details about who served the content.
That could be a useful feature for decentralization; we could finally serve a mirror of a blocked site with its real origin. Of course, I guess Google will probably not design it to be useful for that.
Yeah that's the problem. Bringing more federation to the web would help defeat censorship by making it easier to relay/aggregate information while retaining proof of authorship.
This is the thing I would want to ask Matthew Prince about if I ever met him. Cloudflare working on AMP really surprised me; I would've expected them to make some sort of de-AMP product.
Yes, these "improvements" in Chrome are meant to make Google the de facto interface for using the web.
Imagine a world where 99% of users do not have any concept of URLs or any other fundamental WWW concepts. Instead, they open Chrome, type whatever they want, and get the results. It is not difficult to imagine a world where non-tech users do not think of Google as a search engine but as the internet itself.
Except that Google has to pay Apple $12 billion a year to remain their default search engine, a price that has increased every year.
What Chrome did by training users to enter their searches in the address bar (via the default tab’s fake Google search box, which moves the cursor to the URL field) was stupid, expensive, and probably undermines Google’s ability to be a destination web site in the long term.
At this point, the overwhelming majority of devices are not Google’s. There is no reason to think that in the future they will be. After this antitrust stuff, their influence over Android may be even less than it is now.
> After this antitrust stuff, their influence over Android may be even less than it is now.
Sadly, Android will probably falter then. As much as things are a shitshow, the unified app ecosystem is a big deal: it allows competition among the smartphones themselves, which lets users switch without getting burnt on app costs. Otherwise, corporations are all about vendor lock-in and fucking over the consumer for short-term gain, because greed is king these days.
You don't have to imagine that world, merely adjust the ratio of people living in it. Are we at 30% of the population living in that world already? Is the ratio progressing towards 99%? How quickly?
My wife already does this. She hates typing URLs, even when she knows them (e.g. Amazon, Home Depot), and doesn't really see why they are necessary when she can just Google everything.
I do hope you tell her the prices she pays are significantly higher because Amazon and Home Depot have to pay for her arriving via Google.
Some Google clicks cost $25 each! You can be sure it's you who pays that bill in the end.
Better hope not too many people looked at what you want to buy before you did. ;)
Someone should fund a "cash back for not using google" browser extension.
Amazon has smile.amazon.(tld) that sort-of serves to nudge people away from search - doesn't give cash back to you, but donates to a nominated charity.
I don't use Safari on my MBP for exactly this reason (lack of extensions being another). Apple has always erred on the side of "obscure everything that even remotely reminds users of PCs". The current iPad's marketing slogan is "What's a computer?".
They've introduced mouse support (in 2019 no less), but even then it's disabled by default and buried in the Accessibility settings menu.
In Japan, very rarely do I see URLs. Most of the time, it's a keyword in what looks like an input box and "検索" (which literally means search) in what looks like a button. It's been this way for as long as I've been here (> 6 years).
Edit: come to think of it, a keyword in Japanese is easier to remember/type than a URL in Latin characters. Sure, now there are IDNs in domains and TLDs, but that wasn't a thing until relatively recently, and it's still harder to remember/type than keywords.
Apple did this a while ago. Safari only shows the domain and the lock icon, nothing else.
I get the discussion about Google changing the web to something that pleases their pockets over everything else, but I am not so sure whether that's the actual motivation in this case. Non-technical users can't do a lot with the rest of the URL anyway and even I as a technical user didn't mind Safari hiding the path and scheme up to now.
Yep. I've found the dissonance here particularly jarring - collective shrug when Apple hides the URL, but if Google does so we get huge outrage and assumptions that this must clearly be done primarily for malicious reasons.
I know it's fashionable to hate Google right now, but this reaction seems a little over the top.
My only defense is Safari on iOS – the screen is so small that showing the full URL, even with the pathname grayed out, would make the URL seem bloated; at least seeing the real domain is better for security and UX. Can't defend the full URL being hidden by default on desktop, though.
I think the difference is market share. Chrome is at an estimated 60-70% of all worldwide browsing. On desktop, it is over 70%. As a result, most developers (the primary audience of Hacker News) use Chrome as their primary browser, so they tend to see what their users do. Safari has a fraction of that: 15% overall and 5% on desktop. The truth is that this is being discussed now, with Google, because it affects the vast majority of users, whereas previous changes affected few (and even fewer in the Hacker News core audience).
The short, unkind answer is that people shrug because Safari is not big enough to care about. You'd see the same apathetic response if Microsoft hid URLs in Edge.
> collective shrug when Apple hides the URL, but if Google does so we get huge outrage and assumptions that this must clearly be done primarily for malicious reasons.
Who uses Safari? People depend on Chrome on all sorts of platforms. Screwing with the URL field affects millions of people.
We are heading toward an AOL-type internet where you enter a keyword in the address bar and get funneled in a single direction – shopping, weather, search, stocks, email, etc.
I guess the direction is appification of the Internet. On your phone, you launch "Facebook" and "Instagram", not /mnt/apps/facebook.apk and /mnt/apps/instagram.apk. Likewise, on the internet, they want you to visit "Facebook.com", not "https://facebook.com/index.html".
Yeah, it's about products for the most part. (Also, what happened to human-readable URLs? Even if you're driving your single-page app on fragment identifiers, you could route them internally based on the path info.)
On a more general level, I think it's probably about amplification vs. augmentation – about what kind of human-computer symbiosis this dialog really is. (Actually, I do prefer Licklider over Engelbart.)
Take for example a middle-aged worker in India, who has now one of those new, cheap, not-that-smart phones on a $2/month mobile plan. S/he connects to streaming services via voice search and communicates by the press of a single button. For the first time in her/his life, s/he's able to reach out on her/his own. This is certainly empowering. There's much to be said in favor of such interfaces on devices like this.
However, Licklider's vision of a human-computer symbiosis was of a completely different nature: partners cooperating on equal levels. This was not about amplification in this sense; it was about mutual comprehension and directing work flows, with the latter heavily depending on the former. This is the desktop environment. Here, simple amplification models ultimately do more harm than their convenience is worth.
Now, where are the growing markets, and what should a company care about? But this may be a question, and an answer, too simple. When will our Indian worker gain access to a laptop? The answer may well be "never". Is this still the same market, the same product, the same network?
Exactly this. But even if the URL is irrelevant for apps, it allows the web as a publication platform to cohabit with the web as an app platform. Removing URLs is actively pushing for the app platform.
Maybe in the future we’ll have two different apps with a UI optimized for each usage, i.e. Chrome for Apps and Chrome for Web, even if the rendering is the same.
You meant index.php, not .html - FB is still largely based on a variant of PHP. If you try both constructs, .html will return their equivalent of 404, whereas .php will display the main page without complaint.
> It may also be easier to parse than an overloaded omnibox/location/search/navigation/security/menu bar.
I think the point is that Google doesn't want users to be able to parse this much information. Keep the people dumb, if you want to call it that way. Or uninformed, uneducated, whatever.
"Enter whatever you want to know or see in this field and we will do some magic so you get what you want".
That's where they want to go, no doubt about it, as Google probably profits the most from technically uneducated users of its services.
Particularly given that you have a whole generation now that grew up with these technologies, the proportion of technically literate users will go up over time, not down.
I'm not that optimistic. Technical literacy tends to peak with the literate/interested users who saw the rise of a technical medium. (E.g., I guess my grandfather knew more about the internals of a radio than I do, even if I try to educate myself by watching tube radio repair videos. High-frequency stuff was generally sorted out before I was born. I'm a high-frequency native, but not necessarily an expert.) As things become more complex and more tightly integrated at the same time, the underlying technology somewhat vanishes. If you are faced with this as a fait accompli, it isn't natural anymore to investigate what is ultimately an opaque module.
However, this opaque module is our universal means of production, communication and creativity. We may have to counteract what may be a tendency towards simplification and taking things for granted.
I don't know... I've known the internet since before the web was the internet. I know there's irc, ftp, etc. To those that grew up with the web, there is nothing beyond the web. The web is the internet. At the rate we're going, in time the new generations will rarely see a URL and won't know what they are exactly or what they're for. Google will be a necessary gateway to access anything on the internet.
But what you are describing is a tiny, insignificant minority of very technical users from the 90s, amongst a population who didn't understand it at all (cf. all the 1990s movies where the actor turns off the monitor instead of the computer!).
What I am trying to point at is the vast number of users born after 1990 who have a decent basic understanding of these technologies. And within that generation, the portion that has a deep understanding similar to 1990s users is a population which, by absolute size, is several orders of magnitude larger than it was in the 1990s. And it is going to keep increasing.
> the proportion of technically literate users will go up over time, not down.
I take the proportion to mean "out of all internet/computer users". That has gone down, and that's the proportion I think matters, as it's what dictates the direction internet/computer technologies go. The proportion is going down precisely because more people are using these technologies and the majority are not willing to learn as much.
Growing up with technologies is not enough to be technically literate. I would say it's the opposite: you grew up with it, so you don't really think about how it works.
If that were true, we would see much more skepticism toward home assistant technologies, social networks, "funny apps" and the like. I think people fail to grasp the full impact of technology on their lives, although (or because) they are surrounded by it.
Don't you think that the world's largest provider of search services, ad services, mail services, video services, the largest smartphone operating system vendor, and more would profit, one way or another, if people are technically uneducated and don't understand the things that are happening "behind the scenes" of their browser?
What's gained by people understanding what's happening behind the scenes? We don't expect people to understand the details of how their car engine works, or how digital radio signals are encoded and decoded, why should we make it necessary that they do so for the internet?
So long as sites that aren't using SSL are highlighted, and it's not possible to spoof the browser into displaying a domain you're not looking at, I honestly don't see what benefit the typical user gets from seeing https://www.google.com vs google.com.
> We don't expect people to understand the details of how their car engine works
You may be amused, but in Europe the technical details of a car engine, transmission, brakes, etc., and how to perform a quick repair used to be part of the driver's test (and still were when I took mine). It was a three-part test: a technical part; a legal part regarding the rules and laws regulating traffic; and, finally, a practical test. (Similarly, understanding how analog radio works was part of the high school curriculum.) This was based on the conviction that dealing with a technology also required a basic understanding of it, not least for legal/liability reasons.
Analogously, just making that red "No SSL" alert "go away" may not be enough. It may be preferable if users understood what this is and why it is important, what the possible consequences are, etc.; if they actively looked for it; if they would and could confirm the mode of their interaction. Not least for legal/liability reasons.
(Teach them first to never engage in any critical interaction unless there's a verified, secure connection as indicated by such and such icon or signal. Facilitate them in asserting this. Then put on an additional warning light, but don't replace the former with the latter. There's no "easy mode", no "My First Sony" – which was, by today's standards, overly complex anyway – when it comes to your livelihood.)
In this day and age, you're supposed to connect to your car via CAN-bus, perform tweaks on the firmware, and find failures via data from sensors. Even then, you mostly operate at a high level, like turning off ABS or whatever subsystem. Guys who started out old-school, jacking the car up and getting their hands greasy, ended up patching firmware till morning. The quick car repair you mentioned is unlikely to go beyond swapping a tire for the spare.
> What's gained by people understanding what's happening behind the scenes?
A critical mass of people able to make informed technical decisions — which is what both business & democratic self-government require.
> We don't expect people to understand the details of how their car engine works, or how digital radio signals are encoded and decoded, why should we make it necessary that they do so for the internet?
We don't expect them to know the deep details of those technologies, but we do expect them to know that flammable gasoline powers an engine; we do expect them to know that radio signals can be received by anyone with a receiver (and a key, if encrypted).
No, but technically educated people are able to find alternatives, question why things are happening the way they are, and improve their experience by something as simple as using an adblocker. Big tech companies – I think this is not limited to Google – are not interested in that. People shall use their gated communities, and that's enough of the internet.
> Seriously speaking: If browsers are simplifying the URI scheme for the alleged benefit of users, how do we expect these users to know anything about addresses?
This has always driven me bonkers about the ProofPoint URL-obfuscation 'security'. My work started enforcing this recently, and it drives me crazy:
1. rather than training you to think more carefully about links in e-mails, it trains you to click blindly, because the software says it's safe;
2. it obscures the target of the URL, so that a URL that even a novice would recognise as fishy becomes a garbled string like any other—here's an example (but safe) mangled URL:
3. it filters outgoing links, meaning it assumes that the university employees, not just outside spammers, are hostile (but maybe this is a reasonable assumption?).
> meaning it assumes that the university employees,
Phishing is definitely possible inside your email org; anything from a malicious browser extension to desktop malware might be geared to send bad links to all users (or a subset) within the logged-in Gmail/Outlook directories.
But ya, that sounds like some pretty bad anti-phishing software. Sounds like vendor lock-in more than anything, especially if it's replacing the email links on the mail server itself.
What are the chances it just runs the domain through Google safe browsing? :P
> What's the general lesson taught by such redactions by the browser vendor?
That it drives traffic to their search engine and increases their revenue. A lot of people always use a search engine to navigate to sites anyway, even if it's a site they've been to many times before and even if the URI is as simple as https://somecompany.com.
Heck, I've seen people, on multiple occasions, load up a browser tab with the 'default' address of google.com, enter a url into the google search box (not the browser search/address box, the in-page google search box), and then proceed to click the result it gives them.
At the very least, Chrome's new tab page sends your typing cursor to the omnibox if you try to use the search box. That doesn't work if google.com is set as the new tab page.
I don't know why the parent is being downvoted, but this is completely true. It's not hard to figure out that one of the reasons Google is doing this is so that more people use search engines to find sites, rather than just entering the URI straight up.
Google doesn't need to have their own agenda to do this, they are simply standardizing the usage pattern the majority of users are accustomed to. People don't know what URIs are, they type "facebook" in the large search box at the center of the browser window, then click/tap the first search result. Many even type "google", click the first search result, type "facebook", click the first result.
However, before search boxes in the location bar facilitated such behavior, users were perfectly able to navigate to sites by the use of URLs. (And those were much less experienced users than today's.) It's not that long ago that human-readable URLs (especially the composition of the path) were a "hot" topic. – There's a possible discussion about de-facilitating users through convenience.
They were also predominantly more computer-savvy users back then. And it would probably be very difficult to convince users that it's somehow better to type "facebook.com" instead of "facebook". The extra letters appear completely redundant. Indeed, from a technical perspective, I fail to see why the "big names" shouldn't just be hoisted to top-level domains.
Back then, typical internet users were secretary-level office workers, mostly just trained on the job in how to use MS Office. Still, it worked, for the most part.
(There was a learning process. In the early days, there was a recurring theme with clients and new websites – which were also their first intense exposure to the web – where at first as many pages as possible had to open in a new window ("I don't want my page to go away"), followed by the moment the client learned about the back button and asked why it wasn't working with those links.)
Regarding big names hoisted to TLDs: they are free to buy one, if they care. (For those big names, it's a small investment.) Otherwise, who is to decide who is big enough and who just fails the test? Is this discriminatory; may it even hurt a viable interest? Is this still net-neutral? And, ultimately, I'm not sure single-word host names wouldn't rather add to the confusion. (Autocompletion works just fine for this, anyway.)
If I recall correctly, ICANN didn't allow custom naked gTLDs to resolve to web pages. I just read about people being upset that they wouldn't be able to call their local box http://dev/ because the top-level domain is now owned by Google.
Rather than an omnibar returning search results, it would be interesting if it returned results from somewhere like your bookmarks; if you typed in something not in your bookmarks, it could show search results, and for the first link you click, show a dialog: "Do you want 'Facebook' to always link to this site?"
You could show all the nitty gritty details of the URL and more tech-savvy users could learn over time, but non-tech users would have a simple alias that always takes them to a trusted site.
Something like a cross between AOL keywords and a trust-on-first-use system.
Gah. The metaphor of calling screen area "screen real estate" has always been a pet peeve of mine but this "20 vertical pixels of virtual real estate" has me simultaneously impressed and horrified. Why not just "20 vertical pixels"?
That, and calling battery "juice". It shouldn't bother me this much, but it does. Same for saying "daily driver" instead of "main phone".
I guess they are meant to color up speech / break monotony, but it gets irritating when metaphors are overused and nobody just says "battery"; in the end, just using the boring regular word feels more refreshing. Similarly for articles that keep overusing words like "whopping" instead of the usual formal words (to appear more friendly, I guess), or that infallibly say "invest" for everything rather than just "buy".
Used here because of the implied greed and speculative use made of the resource, as well as for the idea of an alleged market of competing interests.
Empirically (I've done the comparison), desktop browser UIs have grown in vertical screen occupation quite continuously since Netscape Navigator 2, including in the most recent versions, while reducing the amount of information and control provided by the interface. (That is, tabs introduced a new demand of some legitimacy. – However, another vertically stacked horizontal bar isn't the only imaginable solution to the problem. Previously, there had been sidebars with extracted link lists; we can do it for bookmarks and reading lists… We're even better at scanning vertical lists. There's quite some dogma involved. Arguably, a horizontal tab list shouldn't exceed 5 items.)
MOST PEOPLE don't understand what is or isn't correct demarcation, and what order to read things in.
. / - _ (dot, slash, dash, underscore) all mean very specific things, and different things in different contexts (the period specifically), and if you don't have their meanings memorized, and what order to read things in (right to left from the TLD, pausing at each period, then left to right from the first slash, pausing at each slash), then URLs look like illegible nonsense.
But there it's really just about four major separators and the scheme prefix (dot, slash, question mark, hash mark)! – Mind that we're talking about users who are already able to read and parse sentences, including punctuation. There's no special skill set required, just a few bits of information (see the sketch below).
Users are able to parse postal addresses and apply specific meaning and rules to the various parts. (Compared to URIs, postal addresses are exceptionally messy and full of edge cases and exceptions.) Before mobile, users were able to identify phone numbers by country, region, city and district. Where is the crucial difference? Isn't it just that they are told it doesn't matter and that they won't understand it anyway?
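To put numbers on those "few bits of information": the whole anatomy falls out of one call to the standard WHATWG URL parser (the address below is made up):

```typescript
// The four separators plus the scheme prefix, as the browser itself
// parses them (works in any modern browser or Node):
const u = new URL("https://blog.example.co.uk/2019/09/post?ref=hn#intro");

console.log(u.protocol); // "https:"             - before "://"
console.log(u.hostname); // "blog.example.co.uk" - read right to left from the TLD
console.log(u.pathname); // "/2019/09/post"      - read left to right, slash by slash
console.log(u.search);   // "?ref=hn"            - after the question mark
console.log(u.hash);     // "#intro"             - after the hash mark
```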
> how do we expect these users to know anything about addresses
Here's an experiment: walk down any street in the US. Actually, scratch that, because there's a good chance that you live in SF or Seattle and your streets are filled with computer programmers.
Call a random number in a random area code. Ask them what the difference between HTTP and HTTPS is.
Given the state of civics education in many parts of the country, a significant portion of people may not know how laws are passed. That doesn't give governments carte blanche to start passing laws in secret, because "people already don't know how it works".
Which goes to my point: wanting users to be better educated is wishful thinking.
Do you have a plan to educate even 90% of users? That would be a dream come true, but that still leaves a gaping hole of 10%. I would guess 90% of programmers could explain the difference between HTTP and HTTPS (the other 10% being the dreadfully incompetent).
Right now I'm guessing we're at 5% or so and no amount of wanting that to not be true will change that.
> Do you have a plan to educate even 90% of users?
No, but I could easily create one if I was paid to put my mind to it. Just as a hobby, I already educate peers about how a lot of the internet and web and technology work anyway.
I don't really have a desire to add to my life the stress of dealing with the terrible decisions, past and present, of "modern" UI driven almost completely by shady marketing tactics encouraged and often forced by business. I certainly don't have the desire to do so without compensation to offset that stress.
> Seriously speaking: If browsers are simplifying the URI scheme for the alleged benefit of users, how do we expect these users to know anything about addresses? Isn't this rather undermining security than enhancing it?
Users already don't know anything about addresses. Even security professionals botch the "look at some visual indicator of connection security" routine all the time (see SSLStrip). As a security feature, the URL bar is almost entirely useless for the huge majority of users.
URL bars also don't really tell you where your content is coming from. Iframes don't have URL bars. All that javascript downloaded from somewhere when the page loaded doesn't have a URL bar. The web isn't made of uniquely identifiable documents anymore.
There was a plan by Google that makes sense and that we should move toward; it's a shift in mindset:
- current situation: http is normal, https with bad certificate is bad, https with good certificate is good
- future situation: http is bad, https with bad certificate is bad/acceptable, https with good certificate is normal
i.e. instead of telling people to check for the padlock icon and the green name, which should be everywhere, tell people to check for the red warning that indicates a problem. I think it lifts the expectation to something more secure by default.
Understanding the basics of a secure connection and its possible consequences is part of liability and being of sound mind. Teach them first what it is and why it matters: never engage in critical communication without a verified secure connection, as indicated by this or that signal. Facilitate users in asserting the mode of their electronic interactions. Then put an additional warning on. But never replace the former with the latter. There is no "easy mode" when it comes to your livelihood.
And, if users are able to grasp these basics (and surely they are – heck, not that long ago drivers had to understand and be able to describe the workings of a car engine and transmission in order to pass a driver's test), understanding why an unsecured connection has become unacceptable shouldn't be much of a problem.
I second the notion because I've seen people get phished by pages with TLS certs for a long, long time. The teaching that "HTTPS == secure" is just malicious and has to be dropped.
Which is why there should be more, not less, information available to the user (without clicking the TLS icon and probably proceeding into a sub-dialog). Just establishing "any TLS" as the new normal doesn't especially help in a world where you get a certificate for free with a new domain.
Edit: E.g., have a dedicated status bar displaying to whom the certificate was issued, by what CA, and what the security level (level of identification) is, possibly color coded. (Your bank must have a red certificate issued in its name; otherwise, run.) Prefetch certificates for links and display them side by side with the link target in said status bar, etc.
> Just establishing "any TLS" as the new normal doesn't especially help in a world where you get a certificate for free with a new domain.
What we are working towards is the thing users had been assuming was true all along: that when they visit google.com, that's google.com – else why would it say so? They are astonished to learn that Tim's toy hypermedia system didn't do that, and we're only fixing it now, in the 21st century.
Whereas you're talking about the sort of imaginary world where everybody begins a novel by perusing the copyright notice just inside the cover, to discern whether they are in fact permitted to read further.
Users aren't going to try to figure out what your "dedicated status bar" means; they'll be annoyed that they can't switch it off since it wastes space with things they don't care about, and then they'll just stop seeing it, the way they don't really see those channel ID logos (DOGs) on a modern TV channel.
Your "prefetching" idea is not only terrible for privacy it also doesn't actually do what you probably expect it does, because web pages are too complicated and have been for decades, for this to be effective. In practice because there are so many HTTP transactions _only_ the checks the software can do automatically are worth anything, any time you start talking about how a user should be examining certificate details you're talking about an idea that can't actually work.
> Whereas you're talking about the sort of imaginary world where everybody begins a novel by perusing the copyright notice (…)
Not so much. I'm referring to a world where different weight and importance apply to different matters. There are things that are important and things that are really important (like your bank connection), and it is really the latter that matter.
> Users (…) be annoyed that they can't switch it off since it wastes space with things they don't care about (…)
With much the same anticipation we might have expected users to hate the Start menu in Windows, since it occupies some space and they neither understand nor use it. Now guess what happened. (Turns out, users are interested in knowing how to control their devices.)
Regarding prefetching, this was just escalating the idea. (In actuality, I'm not a friend of prefetching by any means. Don't waste bandwidth on a general level just because you can't serve your assets in a time span that would have been barely bearable even in the modem era.)
You know that this was an actual case, with some publicity? Regarding throwing a bunch of information at the user while a binary "good" vs. "bad" would do: this is ultimately an argument from authority, and neither enabling nor enlightening. Moreover, who became this authority, on what merits, and by what process?
MS removing the Start button from the Windows 8 desktop in favor of the tiles of the Modern UI start page (formerly Metro), for a more mobile-like, simpler, and allegedly purer/cleaner app experience; massive user protests; and MS finally reintroducing the Start button and the Start menu. Never heard of this?
Oh, if that's what you were talking about then I don't understand your comment at all.
The start menu doesn't occupy space during normal use. (That's why I thought you meant the start button.)
You were talking about getting rid of something that occupied space, but they didn't get rid of the start menu. In fact they made it bigger when activated.
And the changes they made had nothing to do with whether the user can control their device.
So I don't think any argument based on the windows 8 start menu is suitable for saying that an extra bar is a good idea.
The button did occupy some space, and MS apparently went for a "cleaner" (and simpler) look. Also, the traditional Start menu was found to be too complex. (The entire Modern UI revolution, with full-screen apps in front, was about mobile-like simplification on the desktop.) I do see some parallels to this.
> Which is why there should be more, not less, information available to the user (without clicking the TLS icon and probably proceeding into a sub-dialog).
We gave all the information to users already, they didn't know what to do with it.
> Just establishing "any TLS" as the new normal doesn't especially help in a world where you get a certificate for free with a new domain.
Establishing non-TLS as insecure is the end goal.
> possibly color coded. (Your bank must have a red certificate issued in its name; otherwise, run.) Prefetch certificates for links and display them side by side with the link target in said status bar, etc.
Colors mean different things in different cultures and have to be learned. Due to frequently broken TLS configurations and the like, users will ignore the warnings as well. I really, really don't think warnings help; either fail-shut behavior is enforced (with annoying bypasses) or it might as well not exist.
Safari actually does (or did) this: normal (grey) https icons for normal certificates, as well as color-coded icons including an abbreviated version of the party the certificate was issued to. (Compare this to the variations of shield icons in Firefox, which you have to click for further information – that seems much more cryptic to me.)
As for colors or other signaling: yes, these are mostly cultural and have to be learned. (E.g., red isn't naturally "bad" or a warning; it may rather be "important".) However, this is such a crucial technology – isn't it a viable option to have a set of rules which easily fits on a single page of legal paper in big print, and to teach this to everyone? (How do people manage to cross the street, if they can't be expected to incorporate a simple set of rules? We, on the other hand, rather teach them to be careless and to suspend judgement. Notably, I'm speaking of the desktop environment; mobile has a completely different set of requirements and widely proliferated alternatives, like dedicated apps for crucial tasks.)
And, regarding not caring about the identity as long as it is TLS, I'm not so sure.
To that extent, the "URLs aren't specific patterns to copy accurately, they're just words you type like Google searches" user approach promoted by this design is equally unhelpful...
To be fair, URLs look like gibberish to most of the world, because pretty much no browsers do IRIs and a lot of services are actively hostile to IRIs and IDNs; it is a step up in usability for most of the world to see "just words you type" instead of most URLs.
It's a step up in usability for everyone, including native English speakers who know what https means but can't remember the precise URL – but a step down in security from "type the magic incantation correctly".
Whatever this move is supposed to do, I guarantee you its #1 goal is to help Google first; any positive for users (not necessarily a net positive, just "a" positive) will be a side effect at best (after all, everything in life comes with some positives and some negatives).
It's how Google has approached most things lately. "Oh yeah, we're removing this API for user security. No, it has nothing to do with the same move killing ad-blockers in one blow, why do you ask?! And we're very offended you even think we'd do it for that reason! We'd never..."
> If browsers are simplifying the URI scheme for the alleged benefit of users, how do we expect these users to know anything about addresses?
You've accurately assessed the current state of affairs. The current average user knows essentially nothing about addresses. They might have a vague sense that domain matters, but that's about it.
Displaying more information that they do not understand does not enhance the security of the average user. Displaying more information useful to the exceptional user (i.e., us) has to be weighed against the degree to which it leads average users into a learned-helplessness reaction.
I don't see why this should be a matter of capitulation, especially in the desktop environment. Users have been able to maintain a basic understanding of similar schemes, and have operated them, for centuries. Compared to most real-world schemes and regulations, basic URIs are exceptionally well regulated (there is such a thing as a well-formed URL) and easy to understand. What changed?
(My go-to example is the level of understanding required to operate a kitchen and to prepare a meal, which is surprisingly high, but not deemed worth even mentioning. Users do understand the difference between a spoon and a knife, as long as they understand its relevance to them. They are even able to operate an oven and to deviate from complex recipes in creative ways, all while finding and managing all kinds of equipment and handling time-critical work flows. Is a similar operational understanding of what may be the most important technology of our time really asking too much?)
You're absolutely correct that URIs are well-regulated, well-defined, and easy to understand for those with appropriately technical backgrounds. You're also correct that users don't need to understand in significant technical depth how URIs work in order to use them for most daily operations.
In my opinion, what's changed is that someone won the political battle between "URIs are so simple and well-defined! Everyone understands them." and "Users don't understand URIs" camps.
With that said, it's perhaps worth considering that most users might regard computers as essentially magical and fundamentally incomprehensible things. In such a context, could it be possible that it might actually be too much to ask for your average user to have a good operational understanding of something that seems to them to be fundamentally incomprehensible? A great many people have already learned helplessness in the face of technology. Could it be that decades of exposing people to the elegant simplicity of URLs has not created widespread operational comprehension?
(What's the alternative? Just stumble into the pits the dogma didn't provide for? If I'm hurt, it's the will of the Web? – "Nexus vult", as the web-fathers put it?)
Also, training the user to treat the URL as text, so that you search for domains (type "amazon" instead of "amazon.com"), is better for Google. The URL bar in Chrome is also a search bar (and every domain you enter is sent to Google "for search suggestions").
Yes, I also believe this is the primary reason Google is doing this. They want to hide urls with referrer tracking, AMP URLs, and other such things from the user.
For Google, there is no money to be made when people know how to enter a URL into a browser's address bar. The money is in users searching for a company by name each time they want to visit its website. That way, Google can bombard users with adverts; when they click on those adverts, Google gets paid. Now companies are forced to advertise using Google AdWords so that they appear at the top of the search results. Otherwise, Google will show their competitors' ads first.
Also, there is terrible URL usability with Google AMP pages. Google has every reason to do away with URLs as a user tool. I might be going off the deep end now, but if there are no URLs, why ever leave google.com at all? AMP is basically an early opt-in to rebranding the internet as google.com. Of course, it's all being done in the name of security, speed, and data saving. Truly altruistic motivations.
So apparently Google rolled out an update where 'm.domain.com' turns into 'domain.com' in the omnibox. In what world is this acceptable? How can they assume that 'm.' always means mobile and that 'www.' is the same as the root domain for all hosts?
Let's assume you have a blog platform offering subdomains for each user, and 'm.blogplatform.com' is available. Now any user can grab that subdomain and impersonate the homepage, because Emily from Chromium decided that eliding parts of the URL without any spec is a reasonable decision.
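To make the hazard concrete, here's a sketch of that kind of elision (my illustration, not Chrome's actual code):

```typescript
// Strip a leading "www." or "m." before display - the elision being
// complained about. On a platform that hands out subdomains, one user's
// site then becomes indistinguishable from the platform's homepage.
function elideForDisplay(hostname: string): string {
  return hostname.replace(/^(www|m)\./, "");
}

elideForDisplay("www.example.com");    // "example.com"      - probably harmless
elideForDisplay("m.blogplatform.com"); // "blogplatform.com" - impersonation
```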
I hate to break it to you, but if you allow www.companyname.website, companyname.website, and m.companyname.website to be owned by different entities, your users are already being phished.
I'm not sure why everyone is so up in arms here. I don't see how this change is detrimental to the web or somehow good for Google. Hiding the "https://" seems like a perfectly fine idea as long as there's a clear way to distinguish between https and http pages. Safari's done this for a long time; https pages display a tiny padlock icon and http pages display a much more prominent "Not Secure — " prefix (i.e. the obvious display is when you're insecure, which is the right way to go about things).
Hiding "www" seems less meaningful. I'm not really sure what the motivation is there, beyond the fact that the "www" prefix is mostly just aesthetics. My best guess is they want the url to start with "google.com" instead of "www.google.com", except that's not helpful from a security standpoint at all and might be slightly detrimental, if it trains people that the very first word they encounter is the most important, as paypal.whatever.com is not in fact paypal. But a lot of domains already elide the "www" anyway.
Of course, in both cases I am assuming that putting focus on the URL bar will display the full URL.
> Of course, in both cases I am assuming that putting focus on the URL bar will display the full URL.
It won't. From the issue tracker: "The full URL is also revealed by clicking twice in the URL bar on desktop" – in other words, merely focusing the URL bar won't be enough.
OK, this is bad. On Safari, I think it makes sense: they just show the domain, and a lock icon if there is https, to make the UI cleaner, and if you click on the address bar, the full URL shows. It's a good user flow, and it makes sense. This Google move does not make sense; especially when people are so used to the full URL showing, it's going to confuse a lot of users.
The website host can absolutely still be distinguished (two clicks on the URL bar). I guess this change has another upside of forcing website hosts to have working web server configurations.
I think the main issue people have with all this is that we all f*cked up by making Chrome into the new IE6 by pushing it to a monopoly position, and we are only now realizing that. So now we get angry at Google every time we are confronted with our mistake, for instance when they push a change on us that we are not especially thrilled about.
Also, we love change as long as we are the ones doing it. All other changes are bad.
I've been feeling like this for a while, because back when Chrome first hit, it seemed like Mozilla and Google were mostly competing but modestly equal interests. Then Android took off. Then even the fringe browsers became WebKit to keep up. Then, about this time, Google started testing the waters for this level of pushing through standards, and they've just kept pushing, in a way that should violate something other than morality – but I don't feel like there is an appropriate mechanism to slap their wrist and push back. I really wish most of these Chrome standards would stop smelling so much of "I changed it because it didn't work for my current project, and I guess I'll force a standards change to meet my project deadline." I have not felt like most of the Chrome changes I have observed recently are given more thought than an intern/junior dev would give them, and they don't have a Linus to tell them to stop breaking userspace.
I'm not sure either. URLs obviously aren't going anywhere, even if Google makes this UI change. Social media and email still exist. People aren't going to suddenly stop sharing links to news and blog posts. If Google Chrome really makes it too hard to copy the URL of the current page, then even my least technically literate friends will switch to another browser – they all send each other URLs all the time.
This part is puzzling though:
> The full URL is also revealed by clicking twice in the URL bar on desktop, and once on mobile.
Twice? I can't picture how that will work as a UI. What happens after the first click?
"Security sensitive context" are the key words here. On mobile devices, you get, what, 300 pixels across to show the URL? You want to show as much relevant info as you can, and if you can summarize "https://www." then you get more real estate for the domain name itself.
It seems reasonable to me. I get the hate for FAANG, but some people just lose their minds.
To me the problem is that it obscures the page URI, making it an unreliable indication of exactly where you are.
Is this a real concern for typical end users? No, I don't think so. It's mostly annoying for the HN crowd, who need to see the URL for development or are very security-sensitive.
It's all fine until you have to explain to your loved ones how to read domains, and the difference between an email link to random-server-name.paypal.com vs. www.paypal.random-name.com.
If we remove metadata like this from everyday browsing, maybe it will become even harder for people to grasp the idea of how domain names work?
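The distinction your loved ones would need is mechanical enough to write down (a sketch; the last-two-labels shortcut again stands in for a real Public Suffix List lookup):

```typescript
// Who actually controls a host name is decided by the registrable domain:
// the labels just left of the public suffix - naively, the last two labels.
function registrable(host: string): string {
  return host.split(".").slice(-2).join(".");
}

registrable("random-server-name.paypal.com"); // "paypal.com"      - PayPal
registrable("www.paypal.random-name.com");    // "random-name.com" - not PayPal
```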
Google doesn't want you to know that. Google wants you to rely on GMail to get rid of all the emails Google doesn't want you to read, and to rely on the Google Safe Browsing database to decide which websites are in Google's interest for you to view. Knowing domains is also harmful to Google: you're supposed to trust Google's search to find websites (and the helpful advertisements for competing sites that bribe Google better, now that they've neutered adblockers).
Telling people to look for https:// as a way to indicate that they're secure isn't good enough. There's nothing stopping a malicious party from applying an SSL certificate to www.paypal.random-name.com. Browsers need to inform users based on actual information that they have (e.g. the SSL cert, reports of malicious domains, etc.) and display that to the user.
Essentially, Google knows more about the state of a website than a user can learn from the domain alone, and they should display that to the user. Once that's happening, the various parts of the domain are less important. The downside is that it increases Chrome users' reliance on Google.
Telling users to carefully type https://www.paypal.com is somewhat less user-friendly, but a lot less phisher-friendly, than the "just type the company name, the other bits don't matter and the browser will find the right site for you and tell you it's secure" pattern.
PayPal has good SEO, but not every shopping cart does, and it's not like Google is manually vetting the content like AOL keywords did...
My preferred solution would be an open API standard that companies could use to provide independent information about domains, that browsers could use to pull data about domains, and that companies who index the web could use to update whether or not they trust domains. That way anyone could set up as a source of information about whether or not a domain is trustworthy.
I'm not in a position to actually do it, though, and Google is (for themselves, without sharing the API), so for now that's the best we've got. Maybe someone who has the resources to compete with Google will step up.
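For illustration, such an API could be as small as this (everything here – endpoint, response shape, field names – is invented, not an existing standard):

```typescript
// A domain-reputation lookup against a user-chosen source. A browser could
// query several sources and show their agreement/disagreement in its UI.
interface DomainReport {
  domain: string;
  trusted: boolean;
  reason?: string;  // e.g. "reported phishing", "verified business"
  updated: string;  // ISO 8601 timestamp
}

async function checkDomain(source: string, domain: string): Promise<DomainReport> {
  const res = await fetch(`https://${source}/v1/reputation/${encodeURIComponent(domain)}`);
  if (!res.ok) throw new Error(`reputation source ${source} returned ${res.status}`);
  return (await res.json()) as DomainReport;
}
```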
> open API standard that companies could use to provide independent information about domains ... anyone could set up as a source of information about whether or not a domain is trustworthy.
and how is a user supposed to know whom to trust for this information, if there are multiple sources?
At the root, the problem is one of verification – that whoever you are communicating with is who they claim to be. At some level, there needs to be trust in one entity as a definitive source (that the site you're looking at is indeed created by the business that the site claims to represent). There has to be some level of trust somewhere...
That's what they wanted when they pushed Let's Encrypt. https meant something before, since a certificate was expensive and required proof of identity. Yes, those methods weren't infallible, but they were good enough. Now we've lost that with free https.
Certificates weren't expensive before Let's Encrypt, several outfits offered free certificates, especially on a "trial" basis that would be adequate for criminals even if it was largely useless to legitimate users.
But expensive certificates were, and still are, available to those with the Apple mindset. DigiCert will sell you a certificate for $218. It lasts 12 months.
And you're probably thinking: Right, that's a _proper_ certificate; it'll assure me of who bought it, and it comes with true security and all this amazing stuff. Nope – it's the same DV assurance that Let's Encrypt gives away, except DigiCert gets $218 of your money. And why not?
If there's a guy who wants to buy one glass of water from me for $100, who am I to insist drinking water is free?
Anyway, no, certificates did not require "proof of identity" prior to Let's Encrypt. In fact, back then they only required that the CA use "any other method" – a term of art in the rules that meant the CA could use its own best judgement (perhaps clouded by commercial considerations) to decide what was enough to be sure you controlled example.com before issuing you an example.com certificate.
_After_ Let's Encrypt, and with substantial input _from_ key Let's Encrypt people, this was reformed into the Ten Blessed Methods (there are not actually ten of them today, but I like that name and it seems to have stuck), which define explicit methods for how a CA must check that you control the DNS names you want certificates for.
You are living in an all too common fantasy world. A world where you needlessly spend more money to achieve less security because you don't want to be confronted with facts.
> When you paid with your card they knew your identity.
Who's the "they" in that sentence? As it stands, a certificate reseller knows that the Paypal account "some.name.here@gmail.com" paid for a SSL certificate for "www.unrelatedcompany.TLD"
The certificate itself tells you nothing about who paid for it - it doesn't even tell you which email account was used to confirm some level of association with the unrelatedcompany.TLD domain.
Then PayPal has the data and LE can follow the trail. Because it's about LE being able to tell who actually bought the certificate, telling me end users can't do that is kinda moving the goalposts.
However, even in this particular edge case, why hide that there's a prefix (while you, the browser vendor, were able to put up some kind of indication for this with ease)? Generally, highlight, but don't conceal.
It's already hard to explain what a URL is and what it means. 99.9999% of people using the internet do not grasp how domain names work, and, unfortunately or fortunately, this time the tech-savvy 0.0001% lost.
> We've worked with other browser representatives to incorporate URL display guidance into the web URL standard (https://url.spec.whatwg.org/#url-rendering-simplification). The URL spec documents that browsers may simplify the URL by omitting irrelevant subdomains and schemes in security-sensitive surfaces like the omnibox.
Google is trying to make this obfuscation a standard.
I don't think it's deliberate undermining, I think it's more, "people complained that we were breaking standards, so we fixed it by changing the standard."
The Chrome team is almost incapable of reversing course on their decisions. Even when they get massive pushback and enough negative press to effectively force them to reverse course, it's only ever temporary.
"Widespread criticism over a decision? That just means people aren't ready yet, or they're too emotional to think clearly about our position. Give it a couple of months to a year, and they'll all have calmed down enough to realize we're right."
"People are complaining about breaking standards? It's not that our decision was wrong, people are just upset we didn't check all the right boxes and fill out all the right forms before we made it. It's not real criticism, it's just people being legalistic about the standards process. Fine, we'll play that game."
And then they wonder why people automatically assume the worst whenever Chrome devs propose something new.
There is a positive feedback cycle leading to polarization, and it works both ways. The more people assume the worst, the more likely they are to criticize with only a cursory understanding of what's going on, and the more that justified criticism gets lost in the noise, leading to further communication breakdown as more people stop listening. In the worst case, certain things become "common knowledge" that nobody bothers checking anymore.
Breaking that cycle requires putting effort into understanding both sides and going beyond quick comments.
Breaking the cycle can't be done by only one side.
If good maintainers aren't put in contact with good critiques, then they'll eventually assume every take on their work is purely reactionary. But in the same way, if good critics who take the time to write detailed critiques aren't listened to, then they'll eventually get tired and stop engaging.
One side can't put in all of the work, or we'll end up in the same exact situation within another year.
I've seen the conversation around Chrome degrade dramatically even over the past year or two. Back when Chrome accidentally broke web audio, the community was putting a lot of time into brainstorming possible solutions. You had authors behind some of the biggest games and platforms on the web trying to be constructive.
More recently with the V3 manifest, actual adblock developers from the most popular extensions on the store have weighed in, written performance tests, and shared thoughts. But they've been less willing to go out of their way to assume the best of the Chrome team than game developers were in 2018.
There's a very visible degradation of trust, and an assumption that there's no point in engaging constructively with Google because Google just does not respond to criticism.
Back in 2018, I myself wrote up a massive blog post[0] going over the problems with Google's Web Audio changes. I was careful to assume the best of maintainers, that they really did have the best interest of the web in mind, and that they really were trying to make something great. However, I pointed out:
> These mistakes create a narrative undercurrent that will undermine Google's future efforts to get developers to trust them when they're forced to make difficult decisions... Given enough examples of Google dismissing concerns about its pet projects, the public will simply stop believing that the company cares.
So now in 2019, I'm not going to write a massive blog post going into extensive detail about why obfuscating URLs is a bad idea, because I've fallen victim to the same thought pattern I warned about above. I don't believe that Google cares, and I have better uses of my time.
Why should I waste my time writing this stuff, when I know it's not going to make a difference, and when I know Google is just going to do whatever they want anyway? Why should someone like Gorhill waste their time building detailed breakdowns of an extension API, if there's no chance of getting that API decision reversed?
None of this just arbitrarily happened -- even two or three years ago, developers used to engage with the Chrome team with a lot more patience and a lot more good faith. There are still a lot of members of the community that aren't polarized, they're just tired. And if there was any evidence that Chrome developers were willing to listen to people like Gorhill or Ashley, those people would still be willing to step up.
Yes, I agree that it seems like a lot of work with little payoff. When you're just one person in a large audience, I think it's realistic to assume your influence is pretty limited, and the developers are going to do what they think is right.
I think not doing your homework is fine (I am often lazy about this) but it should be combined with open-mindedness and acknowledging uncertainty.
The "security sensitive" information in the URL bar is horrible.
For TLS you need to look for the presence of a single character: for most people, "http://" and "https://" are both gibberish, and remembering which one is good is not obvious. For domain names you now need to look past all of the subdomains, since mybank.evil.com isn't safe. But don't look all the way to the right, since that is usually full of gibberish, and everything after the "/" isn't trustworthy either. Oh, and watch out for homoglyphs too.
URLs suck for security.
Also I bet you are upset that port numbers aren't included in the displayed URL too.
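The "look past all of the subdomains" step above requires more hidden knowledge than it sounds: the registrable domain cannot be derived from dot-counting alone, which is why browsers carry the full Public Suffix List. A toy sketch with a deliberately tiny, incomplete suffix list (helper names are illustrative):

```python
# Why "just read the domain" is hard: which part is registrable
# depends on a suffix database, not on the string's shape. Real
# browsers use the full Public Suffix List; this is a toy subset.
TOY_SUFFIXES = {"com", "org", "co.uk"}

def registrable_domain(host: str) -> str:
    labels = host.lower().split(".")
    # Find the longest known public suffix, then keep one more label.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in TOY_SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return host

print(registrable_domain("mybank.evil.com"))    # evil.com
print(registrable_domain("evil.mybank.co.uk"))  # mybank.co.uk
```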
You need to justify that "So". Showing all the info you have is often less clear than showing the most important parts. Showing the info you have in an unsorted blob is often less clear than having separate fields/fonts.
(Read "less clear" as a synonym for "misleading".)
Imagine if the URL bar showed the full set of HTTP headers instead. That's more information, and it's more accurate and less mangled than showing just the URL. But it's a much worse interface and harder to use securely.
Industry wants you to open apps and click buttons and links, and not bother with such things as what the URL may be, or, gasp, even typing in or editing your own URL. That's apparently too complicated and dangerous for 'normal' users, according to them.
Wondering how far they can take this. How long before we read that "Google will replace URLs with the Google search terms you can use to find the page you're viewing"
Sounds like the I'm Feeling Lucky button will finally be used and abused by Google. I already hate that search is merged into the URL bar. Even in Firefox, if I type an internal hostname, sometimes instead of trying to resolve it, it Googles or DuckDuckGo's the damn hostname -- at times even when it ends in .com from an intranet URL. I don't understand why it's so stupid in those cases either; Chrome is worse than Firefox in this regard. I don't mind it on mobile, where real estate on my screen is already expensive.
From a quick google (ironic, I know), you can disable this by disabling `keyword.enabled` in the `about:config` menu. If you still want a way to search without manually navigating to a search engine site, there's also a setting in `about:preferences` to add a separate search bar to the browser interface.
* Some smart people want to simplify it, but there's too much “magic”, and it fades away
* The general public learns about the underlying tech, or at least its structure
* Big adoption happens as people trust it
* Some people want to make it “magic”/simple again, thereby creating a black box
* Fast-forward a few years, and the industry seems inaccessible for a long time
* People figure out again how things work

Medical, construction, tech, space, movies... almost everything has this pattern.
People mostly already do this organically, without any encouragement. The vast majority of users don't want to know about "URIs" or other technical details.
Isn't that already the case? The odds of accessing internal sites on Chrome are already quite a bit lower; anything weird in the URL and you just can't anymore.
On Chrome for Android, the URL is not editable unless you press an edit button, which displays a text box. I imagine this is another step in that direction, and if you click "edit", the text box still shows the whole thing.
Compare with how email headers work in gmail. You don't see email addresses anymore by default, but there's a dropdown that shows them.
> Compare with how email headers work in gmail. You don't see email addresses anymore by default
It's a really bad feature. If you have contacts with multiple e-mail addresses on different domains, it takes effort to ensure it's going to the right one.
I really like this on mobile. 90% of the time when I hit the address bar on mobile it is to go to a different URL, not mess with the existing URL. If I need to edit the URL of where I am it is just 1 extra button.
Are you sure this is the case for everyone? On my two android devices, tapping on the address bar still opens the keyboard for direct editing of the URL. I've never seen the behavior you're describing on Chrome for Android, although I concede it may be geofenced or something of the like.
or email to/from/cc in ios mail, where only the person's name is shown. clicking on the name displays a contact card. for people with multiple addresses, there's no way to tell which address is being used.
Yeah, this is incredibly stupid. If a contact with more than one number tries to call you, the only way to know which number they called you from is the little accent indicating "recently used" in their contact card. God knows what happens if they tried to call you from more than one number...
Between this and some other issues, I've switched from Chrome to Firefox on Android. Although, I will admit, the copy-URL button they have there now is moderately useful.
Fortunately you can undo that in Firefox (use separate boxes -- ctrl-l for location, ctrl-k for search -- and disable the "search if not a URL" option), but it's not the default, and that leads to more and more people using Bing to search for Google to then search for Facebook, as they have no concept of the idea of a URL (or indeed a program, a file or a directory) :(
Vivaldi? So just a rebranded Chromium then. Maybe it's not sending private information to Google but it's still leveraging the HTML rendering policies dictated by Google.
I'm on a Mac - if only it were that simple. Firefox still has poor performance on Retina Macs, and Safari is removing support for a bunch of extensions I use daily in the upcoming Catalina release. I use ungoogled-chromium but would love to move to Firefox sometime soon. They just need to sort out the performance issues.
Yes they receive money for having google.com as the default homepage if I remember correctly. The way you worded it makes it sound like Google controls them.
You do realize OP was simply referring to Google having direct control of Firefox financially, not via web standards, yeah? You're implying OP is saying something that they're not.
The comment I replied to tried to imply Google has little control over Firefox ("The way you worded it makes it sound like Google controls them."), the immense control is true, it doesn't matter what OP said in this situation.
They delayed the rollout after people spoke out against it:
"In Chrome M69, we rolled out a change to hide special-case subdomains “www” and “m” in the Chrome omnibox. After receiving community feedback about these changes, we have decided to roll back these changes in M69 on Chrome for Desktop and Android."
I don't think there's any going back this time around...
The omnibox never seemed like a crowded part of the interface. Did they do any kind of user testing where they found people were confused or annoyed by this 'https://www' prefix? Or is this just one team's sense of aesthetics?
People who don't understand URLs will use search to find sites. Obviously I've no clue why Google would be motivated to encourage that behaviour. Must be their sense of aesthetic.
So your claim is that simplifying URLs, which on its face seems like an attempt to make them easier to understand, is secretly intended to make them hard to understand?
Hiding behind more clicks is not always simplifying. URLs are central to the Web as we know it. Hiding important portions of it will only foster ignorance of the structure of Web in current and future users. What you don't know, you cannot use/employ. It becomes "hard to understand."
I don't doubt that Googlers believe this is a useful change, but that belief is borne out of their own vested interest in deprecating traditional web navigation. That's really the best light you can put it in, considering how utterly user-hostile this decision is.
The best light you can put it in is that removing unnecessary parts of the URL makes it easier for less advanced users to understand what the URL consists of, helping them use "traditional web navigation".
To be fair, Google has previously experimented with hiding the entire path part of the URL, which does hinder URL-based navigation by making the displayed text unusable as a URL. However, that's different from this change, where typing the displayed text into a browser's URL bar would normally get you to the same page (unless the site is configured strangely).
> removing unnecessary parts of the URL makes it easier for less advanced users to understand what the URL consists of, helping them use "traditional web navigation".
How? If users are supposed to learn "what the URL consists of", how does removing and hiding all but one piece of it help?
It's like saying, "to make postal addresses easier to understand, we'll hide everything except the recipient's name and surname".
If I want to go to facebook, all I need to write in the address bar is "facebook.com", but the displayed address is "https://www.facebook.com/". A naive user would believe that they need to write the entire "https://www.facebook.com/" instead of just the short version to navigate to facebook, which is too cumbersome for them so they just use google search to find it instead.
That is site-specific behavior. It depends on a redirect from http://facebook.com/ to https://www.facebook.com/ which may not exist for all sites. If the user wants to end up at the proper page without risking an insecure redirect over HTTP then they do need to type the entire URL.
I would support making HTTPS the default when an address is entered without an explicit scheme, and site operators should just drop the obsolete www domain name prefix. However, the browser should not be masking the full name of the site you're visiting. At the very least it should verify that www.example.com and example.com resolve to the same IP address(es) and use the same TLS certificate before presenting them as equivalent, though even that doesn't guarantee that they serve the same content.
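A minimal sketch of that equivalence check, using Python's standard socket/ssl modules; hostnames are illustrative, and a real browser check would also need to handle multiple A/AAAA answers, SNI-dependent certificates, and so on:

```python
# Sketch: do example.com and www.example.com resolve to the same
# addresses and present the same TLS certificate? Illustrative only.
import socket
import ssl

def addrs(host):
    # Collect the IP addresses the name resolves to (TCP/443).
    return {info[4][0]
            for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)}

def cert_der(host):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)  # raw DER bytes

def looks_equivalent(apex):
    www = "www." + apex
    return addrs(apex) == addrs(www) and cert_der(apex) == cert_der(www)

# Even a True result only shows the endpoints match, not the content.
print(looks_equivalent("example.com"))
```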
You're right, and my experience was clouding my judgment. It's a hostile decision to me only because I know and care about the difference between a domain and a subdomain.
I thought I'd be able to find a counter-example where the two forms of URL get handled differently - perhaps by redirecting the bare URL to "m" but still serving the desktop version on "www" - but a quick survey of whatever sites happen to be open in my browser right now says nobody does that.
From a tech literacy standpoint I still dislike the obfuscation, but /shrug.
You can't make something easier to understand by hiding crucial pieces of it. Google's efforts to slowly gut the URL aren't meant to make it easier for people to understand what URL is; they're meant to make addresses look as similar as possible to search queries, so that ultimately they can drop user-visible addresses altogether and route everything through search.
You can disable this behaviour, and it's all right there when you go to click on it. I presume you mean on Mac, btw, because I'm using Safari on iOS and I can see the full address right in front of me. No protocol spec, but a lock icon at least.
The way Safari does it is a lot nicer UX. In Chrome, the URL scheme is always hidden, but if you press copy it magically copies the full URL to the clipboard – as opposed to the actual displayed text. In Safari, the URL scheme becomes visible once you click on the URL bar, so what you see is what you get.
That said, I dislike the default behavior of hiding the entire path until you click, as opposed to just the URL scheme.
Don't let it grate on you, just use Firefox. The only way you're going to influence their development direction from the outside is to choose a non-Chrome based alternative.
In the 90s, people used to bemoan Microsoft using their OS monopoly to force IE6 on everyone.
Many tech people don't seem to care about Google and the Chrome monopoly
Maybe it was the same in the 90s, but the only tech people I knew were on usenet and slashdot, and thus it only seemed everyone was against Microsoft's behaviour.
I think the reason they don't care is that, if they don't have to work with it (their de-standardization of HTML, I believe), they can trivially find countless alternatives themselves. They aren't restricted.
Please change my mind.
Besides the 1% privacy gain and some more free RAM when using Firefox over Chrome, what are the key/killer arguments for Firefox, NOW?
That Chrome might become shit in the future is not up for discussion...
While it's better now in many ways, some of which you've already dismissed, the future matters now because our choices now affect our options tomorrow.
In a way, with your browser choice you're effectively voting for the internet you want in the future.
If that's Chrome and, despite the practical benefits of using Firefox, you trust Google above Mozilla to work in your best interests, then use Chrome.
- Better fingerprinting resistance (AFAIK Firefox is the only browser currently uplifting Tor features).
- Tab containers. They're still reliant on extensions, but get past that and they're incredibly handy, even if you don't care about privacy at all. Tab containers allow you to have multiple different sessions running at the same time without switching profiles, so you can be logged into sites with multiple accounts at the same time. If you've ever used private browsing just so you can log into something twice, tab containers let you do that faster and more flexibly.
- Built-in screenshots. Yes, your OS already has this, but Firefox's screenshot tool is integrated into the DOM, so it lets you select a DOM container to screenshot, or screenshot the entire page, even if it's not all visible. It's a tiny feature that I regularly use, and I appreciate not needing to install another extension to get it.
- Better SVG performance. There are some parts of the browser rendering process that Firefox just does better than Chrome. There are some things it does worse, but my (subjective) experience has been Firefox is usually faster than Chrome at non-JS heavy tasks like browser repaints. Chrome is investing heavily into JS right now, Firefox is investing heavily into repaint/CSS performance.
- Keeping with that theme, Firefox dev tools have a bit of an edge when it comes to HTML/CSS editing (better rulers, font selection, Flexbox/Grid editing, stuff like that). Chrome dev tools are still better if you're doing heavy Javascript editing, but I prefer to debug CSS and do page layout in Firefox.
- Better defaults on small features like autoplay-blocking. Chrome allows videos on inter-site navigation to autoplay, Firefox doesn't. Chrome uses a behavioral algorithm to whitelist sites, Firefox doesn't. Just more sensible defaults in general. Chrome suffers from the same problem as a lot of Google products, where they're mostly sensible, but you'll occasionally run into really quirky design decisions that just seem like no one thought them through enough.
- I know you're not looking at the future, but it is almost certain that the Chrome Manifest V3 changes are going to go in for extensions in the near-ish future, and when that happens, Firefox will also be the best platform for ad blockers. I'm including that just because it's a relatively certain change that isn't very far off and that will have a very obvious effect on day-to-day browsing experience for a lot of people.
- Speaking of adblocking, if you're on Android, Firefox supports all of its desktop extensions, including ad blockers, which IMO is a killer feature -- extensions like UMatrix will save you a ton of mobile data. And if you're already using Firefox on Android, you might as well use it on the desktop as well so you can maintain the same extension-list or sync bookmarks between your devices.
Hiding the scheme is a dangerous path, because users then have to depend on browser specific indicators of https, or other secure schemes.
I think it's a bad idea for that reason, and for another higher level reason. As a developer and sys admin before that I've always tried to assume my users are smart people and try to educate them about what they might not know, instead of assuming they are dumb and attempting to PICNIC-proof software/systems. I think hiding the scheme indicates an assumption that users are dumb and need to be told what to do. I've seen reporting on other indicators of this from Google over the last decade (e.g., Google Reader's sunset, Google+, working with China's censors, and others).
Why must we continue to insist on dumbing down our presentation of technology? Instead, we must insist that users rise to the occasion.
We are all inextricably linked to technology and yet I wonder if the newest users even know what a file extension is; because we hide that by default for some reason now. How many people have been the victims of phishing because an alias is more prominently displayed than the actual domain which the email originated from?
I'm ready for a return to function over form. Do not hide information from me. If I suspect that your product development is driven by the lowest common denominator then I will look for alternatives.
http and https are distinct and as such cannot be hidden; you could replace them with icons (barf) or ports (unlikely).
www.site.com is a subdomain and is again distinct from site.com with the 'www.' omitted. Just because they tend to resolve to the same server does not mean that they must.
Let's just call pi 3 because those decimals are an eyesore.
About 15 years ago we, the techies, killed the IE6 super monopoly (they basically had 100% browser market share). We installed firefox on any device we could get our hands on.
That was my first instinct as well but, you know, at some point if you don't know what it means and you aren't going to bother to find out, extra information is just noise. Green locks and red slashes are probably enough.
That's bullshit IMO, people learn all sorts of complicated stuff in video games through embedded tutorials. There's no reason a browser couldn't do the same thing.
Sigh. Many newcomers to web development learn by observing; this will take that away. Www and https handling requires explicit configuration. Unless the intent is to not let people type in www or http, but what happens when they do and it's not handled correctly?
I'll play devil's advocate. I'm fine with this being the default, assuming there's an option for advanced users to disable it. 95% of Chrome users will not notice or care about this change, and in fact it will make things simpler for them in the long run.
Also, seriously people, ditch www. subdomains already. It's not 2002 anymore. I cringe so hard when I see that crap in print ads and billboards.
I don't think so. Let's say you have one of the new TLDs for your site, say "new-york.florist", a nice domain. It's my belief that writing "new-york.florist" on your ads, business cards or merchandise will not let regular users know that it's a domain name. Writing www.new-york.florist will help indicate that this is your website.
Adding all the new TLDs has made the www prefix more useful than ever.
Doing so often requires non-standard DNS hacks such as CNAME flattening or ANAME/ALIAS records to achieve the aesthetic preference of a bare domain name. The www subdomain is still an obvious choice for anyone that's still running their own DNS, until the domain root aliasing gets a standard implementation.
To do a bunch of new things browser vendors want, they need to put more stuff into DNS. For example eSNI needs keys in DNS, and HTTP/3 (HTTP over QUIC, which is the new encrypted transport protocol) wants a way to discover HTTP/3 availability before connecting with TLS.
So at IETF 105 last week there was stuff pinging around about the idea of a single new record, perhaps just for HTTP or perhaps more, that wraps all this stuff up, with all the SRV features too.
The DPRIVE work is having the effect of reducing the pressure to not use DNS, because crappy DNS servers break all the time when you add new things, but DPRIVE servers are operated by people who actually know what they're doing so that isn't a risk, and eSNI in particular makes no sense without DPRIVE, so why not.
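That single combined record eventually took concrete shape as the SVCB/HTTPS record (DNS type 65). A rough sketch of what client-side discovery looks like with it, assuming dnspython >= 2.1 (which knows the HTTPS rdatatype) and an illustrative domain:

```python
# Rough sketch: query the HTTPS (type 65) record to discover, e.g.,
# HTTP/3 support before connecting. Assumes dnspython >= 2.1
# (pip install dnspython); the domain is illustrative.
import dns.resolver

for rr in dns.resolver.resolve("cloudflare.com", "HTTPS"):
    # SvcParams such as alpn="h2,h3" advertise available protocols.
    print(rr.to_text())
```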
URLs are one of the few things the modern web has retained that have allowed it to be truly interoperable across different devices, platforms, and walled gardens. If we lose the URL, we lose the web. I don't like this at all.
A lot of the comments here are about Google's motivations for doing this, and it seems the consensus is they want to hide URLs so that users don't notice as Google swallows more of the internet.
A couple of questions for those with this point of view:
1) Do you think users notice when they are using AMP articles today, even though the URL has not been hidden yet?
2) Do you think it would actually matter if they did notice a Google-hosted URL? Would they boycott Google or change their behavior somehow?
3) Do you think Apple has the same motivations as Google? Safari has been hiding www, scheme, and path for a while now. Do you think they have any valid reason for having done this?
4) What do you say to Google's stated reasons for making this change (simplicity and security)?
I think the point is that Google wants fuzzy boundaries between things that were never fuzzy to begin with, until they started confusing people - whether they intended to or not.
If you asked any person, before the omnibox, how they go to their favorite website, they had a clear set of steps that worked. If you asked them how they search for recipes, they had a clear set of steps for that too.
Google's omnibox (aka keyword logger) made that boundary fuzzy and made it confusing for people. Earlier, people very quickly learned to recognize what is and isn't a valid URL, because an invalid URL simply didn't work -- this was a very, very useful skill to have. And every time you had a typo, you noticed it immediately. But then Google insisted on merging the URL and search bars and started correcting typos and mistakes, and people started losing this knowledge/skill. So now a lot of non-technical folks I know type the URL into Google's search box in weird ways, like "wwwfacebook" or "facebook .com" or other typos. Nobody actually knows what a valid URL is anymore. That makes them easy targets for phishing and scamming. I'm not blaming this entirely on Google, but they did play a part. Google doesn't seem to realize how they're fucking things up, even if that is not their intent.
This hurts younger generations. They will become more and more oblivious to how things work. They will be forced to "trust" someone that knows better instead of having the knowledge to decide themselves.
For those that become engineers, it will take even longer to untrain all the handholding they've had to endure.
If the web is apparently so damn complicated maybe we should rebuild it to be "simpler" instead of hiding it how it works. I'd prefer to leave it exactly how it is until there is a clear reason to revisit the design.
Reads like folks at Chrome are just making work to have work because the senior leadership has no vision. Needs to be addressed before they burn through the goodwill they’ve built up.
By all means, hide the truth from the users and flat-out lie to them. After all, what's it of their concern which site they are browsing? The main thing is they see the ads.
Good reason to de-Google my existence. When an evil thing is actively trying to destroy the Web in a way even Microsoft didn't, it's time for that evil to get destroyed.
So instead of fixing their tab interface which is terrible if you have hundreds of tabs they rather choose to fix something that wasn't broken in the first place, gj chrome team
Remove the protocol? Sure, an argument could be made that showing it is redundant, given that the lock symbol appears when HTTPS is enabled. But removing subdomains like m. and www. isn't really cool.
I already hate the fact that chrome on mobile totally removes the URL when you focus the navbar, sucks if you want to go to some other subreddit etc.
www must go, it's a remnant of the past, the sooner the better.
protocol is not a detail that should ever be exposed to the end user, at least not in text form. visual cues "you're safe" or "your connection isn't secure" are much more effective.
Post addresses should go away. "You're at home" or "you are away" displayed on your smartphone lock screen should do. For the rest, there's Uber. – I'm not so sure that this is how real life works. Understanding addresses (post or electronic) is an important part of liability and a sound mind. Also, how do you identify authority, if you do not know about uniforms or IDs?
a) you're confusing addressing something with presenting the address; b) you're completely (possibly intentionally) ignoring the context, which is all about specifically "www" part, the trivial non informative part that is just a historical artifact.
If in doubt about the historical context of the "www" subdomains: this was when dedicated services were run on dedicated machines (or rather, dedicated network interfaces). "www" was just a default appellation (much like a well-known address for particular services) for a server running W3 services. I don't see why there isn't a use case for dedicated service domains anymore. (Not everything is WWW.) There's no magic to this and there never has been. (The magic is in the port number as encoded in the protocol portion of a URI.)
Regarding the context of the example, all this is about navigating and about communicating how to navigate. Real-life addresses tend to be much messier, but they are bearable, and users have been able to operate them for what is now several centuries.
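On that last parenthetical, a quick illustration: the scheme implies a well-known port via the system services database, so the port really is encoded in the protocol portion of the URI. A small sketch using Python's stdlib (results come from the local services database, e.g. /etc/services):

```python
# The scheme -> well-known port mapping lives in the system services
# database; the URI scheme effectively encodes the default port.
import socket

for scheme in ("http", "https"):
    print(scheme, socket.getservbyname(scheme, "tcp"))
# Typically prints: http 80, https 443
```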
it's just like your opinion man. not showing redundant information is good. if you disagree where's your rant about browsers lying about port 80 by not showing it?
if your point is that www.name.com can be different site from name.com - i disagree even more, this is a case of incompetent developers at name.com so redirect your rage accordingly :)
> a thing for websites to do
they'll do it eventually, doesn't mean browsers shouldn't be getting rid of redundancy.
> it's just like your opinion man. not showing redundant information is good.
The host name is not redundant information.
> if you disagree where's your rant about browsers lying about port 80 by not showing it?
Why should I write idiotic rants?
> if your point is that www.name.com can be different site from name.com - i disagree even more, this is a case of incompetent developers at name.com so redirect your rage accordingly :)
Well, great that you disagree. Now, please do the work of changing the relevant standards instead of making yourself look like an idiot by calling people who read and follow standards "incompetent developers".
> they'll do it eventually, doesn't mean browsers shouldn't be getting rid of redundancy.
>> if you disagree where's your rant about browsers lying about port 80 by not showing it?
> Why should I write idiotic rants?
You are not aware that all browsers currently hide port numbers in the URI bar if they are 80 or 443? For example, https://www.google.com/ is actually https://www.google.com:443/, but the browsers choose to hide this part. Wouldn't it be better if you actually saw which port the browser connects to? That way you could write https://www.google.com:13000, for example, to try to connect to another port, whereas now you might believe that such addresses don't exist. Removing these two standard ports from the address bar is therefore exactly the same as removing other standards parts, like https:// or www.
> You are not aware that all browsers currently hide port numbers in the URI bar if they are 80 or 443?
Which is why hiding the protocol is stupid. How do you know which port is being used if you don't know the protocol? You are actually arguing against your own point.
I know it is port 80 or 443 based on the protocol - it's deterministic. I don't know that www or m maps to the same place as the apex domain name.
> How do you know which port is being used if you don't know the protocol
by using the browser you opt into viewing the web, which historically is used via http/https on ports 80/443, and just as historically, the majority of web sites have a www subdomain, which these days makes no sense to expose to users, just like ports and protocols.
What a "majority of web sites" are doing is also completely irrelevant. If the standard says that it's a thing that you can do, and the standard says which behaviour the other party has to implement, and you are the only person on the planet to do this thing, the other party is still at fault if they implemented something else.
The whole idea of writing standards is completely useless if you can not rely on what a standard says. If something conforms to a standard, that guarantees that if you also conform to the standard, you will be able to interoperate with it. That is the whole point of writing standards. If either party does not conform to the standard, there is no value to the standard in the first place. If you are supposed to know what some random people consider sane, or the implementation details of everything that you want to interoperate with, and build your stuff for that, then you don't need a standard. The whole reason for writing standards is to eliminate the gigantic overhead and friction of that approach and to enable everyone to instead read only one document to ensure interoperability. All of that becomes worthless when you implement what you think is sane over what the standard says.
google.com could even be using QUIC, which could be on any UDP port but in most cases is 443. That information isn't displayed either. Not to mention the differences between HTTP/1.0, HTTP/1.1, HTTP/2, and QUIC -- shouldn't they all be displayed if we really care about displaying the protocol?
The protocol is irrelevant. The identity is what this is about. URIs identify resources, and conflating different URIs is violating the relevant standards.
It's hiding though, not conflating, it isn't violating any standard. Not to mention, how is a protocol irrelevant but a "www." subdomain isn't? Both are equally useless for most users when browsing.
> It's hiding though, not conflating, it isn't violating any standard.
By doing what?
> Not to mention, how is a protocol irrelevant but a "www." subdomain isn't?
Because the "www." hostname is part of the identity, the protocol is not (if we are talking about HTTP/1.x vs. HTTP/2 or QUIC--HTTP vs. HTTPS is part of the identity, of course).
> Both are equally useless for most users when browsing.
That's irrelevant. It still is part of the identity of the document.
> You are not aware that all browsers currently hide port numbers in the URI bar if they are 80 or 443?
No, I am not, for the simple reason that that is not the case. Browsers hide port 80 only for HTTP and port 443 only for HTTPS, because those are the well-known ports for those protocols, and the specifications of URIs and those protocols define that no port in the URI is equivalent to the well-known port, and what the well-known ports are.
No, the browsers simply follow the relevant standards, as you can see from the links above.
> Wouldn't it be better if you actually saw which port the browser connects to?
No, there is no additional information in displaying the well-known port, as per the standards cited above, so it would be a completely useless waste of space.
> Removing these two standard ports from the address bar is therefore exactly the same as removing other standards parts, like https:// or www.
What do you mean by "standards parts"? Also, could you please cite the relevant standard that specifies that http[s]://www.domain/ is equivalent to http[s]://domain/?
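To make the distinction concrete, here is a small sketch of the RFC 3986 rule at issue: eliding a scheme's default port is a guaranteed-equivalent transformation, while dropping a "www." label is not. Helper names are illustrative, and query strings are omitted for brevity:

```python
# RFC 3986: a URI with no port is equivalent to one with the scheme's
# default port, so ":443" can be elided losslessly. "www." is an
# ordinary host label and cannot be normalized away.
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def normalize(url: str) -> str:
    parts = urlsplit(url)
    host = parts.hostname or ""
    port = parts.port
    if port in (None, DEFAULT_PORTS.get(parts.scheme)):
        netloc = host          # default port: safe to drop
    else:
        netloc = f"{host}:{port}"
    return f"{parts.scheme}://{netloc}{parts.path or '/'}"

print(normalize("https://www.google.com:443/"))  # https://www.google.com/
print(normalize("https://www.google.com/"))      # https://www.google.com/
# No standard licenses the next step:
# "https://www.google.com/" is NOT equivalent to "https://google.com/"
```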
> Browsers hide port 80 only for HTTP and port 443 only for HTTPS, because those are the well-known ports for those protocols
how is that different from "browsers only hide www part of domain name because this is a well known historical artifact and default behavior of any sane website should be to either redirect one name to another or serve the same content on both"
> how is that different from "browsers only hide www part of domain name because this is a well known historical artifact and default behavior of any sane website should be to either redirect one name to another or serve the same content on both"
That one of those is part of the relevant standards, the other is not. What your personal opinion is as to what a "sane" anything should do is just completely irrelevant for this discussion. You build reliably interoperable systems by implementing what the standard says, not by implementing what you would prefer the standard to say, because (a) if everyone just implements what they think is sane, that's guaranteed to fail at interoperating because everyone else will consider something else sane and (b) the process for building the consensus as to how to do things is the standards writing process, so if you think a standard is insane, you have to participate in the standards writing process to change the consensus on how to do things sanely, which then will be reflected in a new revision of the standard.
Whether something is well known is also completely irrelevant. "Well-known port" is simply a fixed term for "ports that have been standardized as the default port for a protocol", and the only relevant fact here is that the behaviour of not displaying the port number is standardized.
> you're being unnecessarily and angrily pedant.
No, your kind of reasoning is precisely how a ton of interoperability problems arise, costing humanity probably billions of dollars to work around each year.
There is no standard that says "how a browser ought to display identity of a site", at least none that I know of, but that doesn't mean that there is no standard that is relevant to how a browser should display URIs.
There is also no standard that says "how an email client should display email addresses". Now, suppose that some email client decided to always replace the TLD in any email addresses it displayed with ".com". Is that conforming to standards because there is no standard that says "how an email client should display email addresses"? No, obviously, it's not, because the address "foo@example.com" is not guaranteed by any standard to be equivalent to "foo@example.org", say, so this transformation causes an email address to be displayed that, according to the relevant standards, is not guaranteed to be semantically equivalent (mind you: that does not mean that they can never be equivalent--there is just no guarantee that they are/there is no responsibility for the owner of those addresses to make sure that they deliver to the same mailbox). Which is in contrast to, say, displaying "foo@ExaMple.org" as "foo@example.org"--those are guaranteed to be equivalent, so substituting one for the other is acceptable.
The exact same thing applies here for URIs. The URI and HTTP(S) standards specify which URIs are equivalent, and if your software displays one URI as a different URI that is not guaranteed to be equivalent, then that is in conflict with that standard.
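A sketch of that analogy in code, assuming the RFC 5321 rule that the domain part of an address is case-insensitive; the helper names are hypothetical:

```python
# Lowercasing the domain yields a guaranteed-equivalent address;
# swapping the TLD yields an address with no equivalence guarantee.
def safe_display(addr: str) -> str:
    local, _, domain = addr.rpartition("@")
    return f"{local}@{domain.lower()}"  # guaranteed-equivalent form

def bogus_display(addr: str) -> str:
    local, _, domain = addr.rpartition("@")
    head, _, _tld = domain.rpartition(".")
    return f"{local}@{head}.com"        # NOT guaranteed equivalent

print(safe_display("foo@ExaMple.org"))   # foo@example.org
print(bogus_display("foo@example.org"))  # foo@example.com -- wrong
```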
in context of browsing web sites in the browser, www is absolutely redundant information.
> do the work of changing the relevant standards
since you're so much into standards, there is no standard for how browsers ought to display identity of the site provider that is being browsed. i also don't know of a standard that mandates anything specific about "www" subdomain. and i completely stand by the statement that serving different content on "name.com" and "www.name.com" is utter incompetency.
> in context of browsing web sites in the browser, www is absolutely redundant information.
Please cite the relevant standard that says so.
> since you're so much into standards, there is no standard for how browsers ought to display identity of the site provider that is being browsed.
Which is just willfully not understanding the problem? There is a standard for which URIs are equivalent to one another, and I linked you to it. Browsers displaying one URI as a different URI that according to the relevant standards is not guaranteed to be equivalent is obviously in conflict with that standard.
> and i completely stand by the statement that serving different content on "name.com" and "www.name.com" is utter incompetency.
Great, I understand that that is your opinion. But the standardized consensus differs from your opinion, and the standardized consensus has precedence over your opinion. If you think the standardized consensus is insane, then work on changing the standardized consensus rather than promoting breaking interoperability.
> then why did you bring it up?
I didn't. Different host names are not redundant, that claim is simply your invention because you don't like that they are not redundant.