That seems really ass-backward, and not just because I use the search feature a lot.
If I'm reading that right, they're deprecating support for discoverable, browser-independent markup for searches and replacing it with the requirement that each site actively develop (and maintain!) a software plugin for every browser its users might want to use.
The whole point of a "user agent" was to go out and do things for me on the web; the idealistic goal was that each person could choose an agent suited to them, which would then have tools to programmatically discover and interact with the web in a common manner (reducing engineering load on the webdevs).
And I don't want to try and use a separate search tool (with new flashing graphics and ads!!!!) for every site I go to... I want a single search tool, like FF offers right now. (Aside: Chrome's auto-discovery of OpenSearch when you hit Tab after typing a domain is actually MORE useful than FF's manual mode!)
Taking a step back and removing support for a declarative API seems to me like exactly the wrong direction for an open web. Instead of sites supporting a single declarative, browser-independent markup, they now have to deal with a long tail of (2-3 + who knows how many) browsers, and users of a niche browser have to spend effort convincing every site to support their browser.
Why not try to improve the OpenSearch markup instead?
Follow-up: I'd actually been using Chrome a bit more heavily, and was wanting to use FF more just to support open standards. One of the main things I was missing was being able to type the Google Docs domain, hit Tab, and type a document name to search for it. I was planning to research how to make something similar work in FF, and now I know how, and that they're removing it :(
There is an ancient feature that lets regular bookmarks serve as "search keywords" [1]: just give a bookmark some keyword and put `%s` in the part of its URL you want substituted, with the value URI-encoded (IIRC; or `%S` to have it used more verbatim). In Firefox it is directly in the "New bookmark" form; in Chrome it is dug away somewhere in the "Search engines" corner of the settings.
For example, setting keyword `t` for the URI `data:text/plain,%S` and entering `t foo` into the location bar will navigate you to `data:text/plain,foo`, i.e. "make a document". If Google Docs has a GET endpoint for creating documents, it should work. For searching you can apparently use `https://docs.google.com/document/?q=%s`.
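(For the curious, here's roughly what that substitution amounts to: a minimal Python sketch, under my assumption that `%s` gets the query URI-encoded while `%S` passes it through verbatim.)

    from urllib.parse import quote

    def expand_keyword_bookmark(template: str, query: str) -> str:
        # Roughly what the browser does with a keyword bookmark:
        # %s gets the query URI-encoded, %S is used verbatim (my assumption).
        return template.replace("%s", quote(query)).replace("%S", query)

    print(expand_keyword_bookmark("data:text/plain,%S", "foo"))
    # -> data:text/plain,foo
    print(expand_keyword_bookmark("https://docs.google.com/document/?q=%s", "meeting notes"))
    # -> https://docs.google.com/document/?q=meeting%20notes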
The parent comment was referring to how Chrome automatically adds search engines. You only have to "dig into" the settings menu if you want to change the keyword or add a custom search.
Even if you're adding it manually, I think the list of search engines is a more intuitive place to put such a feature than "bookmark keywords".
Compared to similar features in Chrome or DDG's bangs, Firefox's bookmark keywords seem less discoverable to me.
The exact same trick works with custom search engines in Chrome as well.
They're equally powerful. Automatically adding search engines means users don't have to do so by hand, but you can still manually create a "search engine" too (e.g. "https://xkcd.com/%s/" with the keyword "xkcd").
In Firefox I keep a folder with bookmarks that have keywords, but I would prefer the UI in Firefox's search engine settings (the bookmark manager doesn't have a keywords column).
The problem is that few people will discover Firefox's bookmark keywords unless they're told about the feature and manually create such bookmarks, while Chrome automatically creates keywords for search engines and prompts users to try them out.
I completely agree that it's a pity such a nice feature isn't better known among a wide audience, and yes, Firefox's bookmark management ("Library") UI leaves much to be desired. [2]
Just one reminder: in Firefox there is an "Add a Keyword for this Search..." command on any (form) input field that triggers the keyword-bookmark creation wizard [1], so what Chrome does automagically when you visit a page with a form (or use the form once?) you can do quite easily in Firefox as well, but you must find the input field, hit Shift+F10 or click a few times, and pick a keyword.
Also, using the same keyword for a different URL will (at this moment) silently "transfer" the keyword to the new URL, with no warning.
[1] Yet again, this wizard hides the resulting bookmarked URL with its relevant `%s` part, so a regular user cannot find out how this thing works. (I'm sad about how hard recent browsers try to hide the whole concept of a URL from users, in general. I understand it, but it's sad.)
[2] I had to `select moz_keywords.keyword, moz_places.url, moz_places.title from moz_keywords inner join moz_places on moz_places.id == moz_keywords.place_id order by keyword;` last time I wanted to see all my keywords. (And I'm trying to keep them in a single folder as well.)
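(If you'd rather avoid the sqlite shell, the same query runs from Python's stdlib against a copy of places.sqlite; the file name below is just an example.)

    import sqlite3

    # Query a *copy* of places.sqlite; Firefox keeps the live one locked while running.
    con = sqlite3.connect("places-copy.sqlite")
    rows = con.execute("""
        select moz_keywords.keyword, moz_places.url, moz_places.title
        from moz_keywords
        inner join moz_places on moz_places.id == moz_keywords.place_id
        order by keyword;
    """).fetchall()
    for keyword, url, title in rows:
        print(f"{keyword:10} {url}  ({title})")
    con.close()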
> I completely agree that it's a pity such a nice feature isn't better known among a wide audience, and yes, Firefox's bookmark management ("Library") UI leaves much to be desired. [2]
My main worry is that I mainly use tags rather than folders to sort my bookmarks, and even after many years tags aren't showing up on mobile. I'm afraid they'll eventually pull tag support entirely and bookmarks will become way less useful for me.
Tags are IMO a better sorting system than folders.
I've ended up using neither tags nor "topic" folders (besides a few folders in the toolbar): for retrieval I rely on titles (names) and URLs alone. Every time I bookmark something I evaluate its title and URL, look for missing terms my future self might search for, and then either reword the title or append raw "tags" at the end. Extraordinary bookmarks get several `*` appended, in proportion to their extraordinariness (and I make sure titles otherwise contain no such sequence).
It serves me well: using the native bookmarks search accelerator (`*` in the URL bar), keywords, and the "rating system" lets me, for example, quickly pull up "the best personal blogs of people writing about javascript":
`* *** javascript guru`
I understand your worries. IIRC there used to be a bookmark "description" field that was just recently removed (I've used it maybe twice), and if tags are as little used globally as that description was, your concern could be quite relevant. If the need arises, perhaps some sqlite-fu could transfer tags to titles, at least; a rough sketch below.
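(Just to illustrate the idea: a read-only sketch that lists each tagged URL with its tags, so they could be folded into titles later. It assumes tags are still stored as folders under the bookmarks root with guid 'tags________' in places.sqlite; check against your own profile before trusting it, and certainly before writing anything back.)

    import sqlite3

    # Read-only sketch: list each tagged URL together with its tags.
    # Schema assumption: tag folders are children of the root whose guid is
    # 'tags________', and each tag folder's children reference moz_places via fk.
    con = sqlite3.connect("places-copy.sqlite")
    rows = con.execute("""
        select p.url, p.title, group_concat(tagfolder.title, ' ')
        from moz_bookmarks tagitem
        join moz_bookmarks tagfolder on tagitem.parent = tagfolder.id
        join moz_bookmarks tagsroot  on tagfolder.parent = tagsroot.id
        join moz_places p            on tagitem.fk = p.id
        where tagsroot.guid = 'tags________'
        group by p.id;
    """).fetchall()
    for url, title, tags in rows:
        print(f"{title} [{tags}]  {url}")
    con.close()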
I have noticed that menu entry for years, and tried using it, but I had no idea how (and I guess it wasn't big enough of an issue for me to look it up). Thank you for explaining.
I for one am rather curious about many recent Mozilla moves. They seem to be moving away from decentralized and distributed models, towards centralized ones.
You ask why not improve the OpenSearch markup, just like I have asked why not improve RSS. I am quite convinced that the RSS model has a lot of untapped potential for a distributed internet, but it also thwarts central control, surveillance, tracking, etc.
> You may disagree that it's the real Bitcoin, but it's undoubtedly open to interpretation
It really isn't open to interpretation. And that is why people have been disparagingly calling it a different name; the insistence that it's "the real Bitcoin" lessens its credibility.
Why is Bitcoin Gold not the real Bitcoin? It's got a more ASIC-resistant proof of work; surely that's also in line with Satoshi's original vision? IMO that's a much better criterion, if adherence to the original whitepaper is what we're measuring by, since it reduces mining centralization. (Heck, if BCH adopted an ASIC-resistant PoW, I'd view it in a new light!)
But you have to let things evolve. Software is rarely perfect out of the gate. Massive p2p software in particular is such a very new thing, there's going to be all kinds of bugs and dead ends. There may be ways to optimize and polish those dead ends to make them the best they can be, but it won't change the fundamentals of them.
And there are tons of levers and settings inside Bitcoin. I don't think they're all set right (BCH's new difficulty algorithm seems like an interesting idea, in fact). But one thing seems to be clear across the biggest coins... a worldwide distributed blockchain ledger just takes too darn long to handle p2p payments worldwide within a reasonable KiB/s budget. Even ETH, as fast as it is, was brought down by CryptoKitties.
The coin that succeeds is going to have to solve that in some fashion. And running all the world's transactions through every node, upping the blocksize as needed, isn't going to solve the problem.
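(To put rough numbers on that: my own back-of-envelope, assuming ~1 MB blocks every ~10 minutes and ~250 bytes per average transaction, which is roughly Bitcoin's ballpark.)

    # Back-of-envelope throughput for a Bitcoin-like chain (rough assumptions).
    block_size_bytes = 1_000_000   # ~1 MB block (assumed)
    block_interval_s = 600         # ~10 minute block target (assumed)
    avg_tx_bytes     = 250         # rough average transaction size (assumed)

    bandwidth_kib_s = block_size_bytes / block_interval_s / 1024
    tx_per_second   = block_size_bytes / avg_tx_bytes / block_interval_s

    print(f"~{bandwidth_kib_s:.1f} KiB/s of block data every node must keep up with")
    print(f"~{tx_per_second:.0f} transactions per second, worldwide")

Even generous multiples of those numbers fall far short of worldwide payment volume, which is why upping the blocksize only moves the ceiling a little.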
1) No counterparty risk. Everyone can get hacked, but Coinbase being hacked shouldn't ever, under any circumstances, result in me losing any bitcoin they hold on my behalf.
This point may prove to be very sticky.
It might be solvable with smart contracts... consider Ethereum's https://www.etherdelta.com, which operates as an auditable smart contract. Your money literally can't be stolen without your private key, even when it's held in escrow by the contract. (Of course, a bug in the contract code could let someone get through; but that's all out in the open, so it's easier to check for.)
I could see something like that being done, with the individual trade data being signed, but held off-chain until the user wanted to make a transfer (then ALL signed movements related to them could be consolidated on the chain).
---
Short of that, a centralized exchange is gonna run into the fundamental issue that cryptocurrencies have value because of a few properties, chief among them that transfers are atomic and non-repudiable. Those two alone make theft a LOT easier to get away with once achieved.
Being able to walk back such thefts would require general consensus of the network. For most coins, the majority of participants would see that as a fatal weakening of the protocol's guarantees, and drop out. The highest-profile rollback I can think of was ETH's 2016 DAO incident. Even that proved rather controversial: though the market voted in favor in the end, it was a messy out-of-band solution, which may not work again.
---
Conversely, some lesser coins such as NAV actually formalized "consensus voting" on such out-of-band issues as part of their network protocol. I could potentially see that being used as a final band-aid, though "tyranny of the majority" issues would seriously need solving. NAV uses proof-of-stake, though, which certainly helps keep their voting in the "put your money where your mouth is" territory.
All in all, I think there are potentially some trustless and "semi-trustless consensus" methods that could solve this issue in revolutionary ways. But none of them are tested... Coinbase may just have to get themselves massively insured using traditional methods, at least for a few more years as the tech expands.
>> 1) No counterparty risk. Everyone can get hacked, but Coinbase being hacked shouldn't ever, under any circumstances, result in me losing any bitcoin they hold on my behalf.
> This point may prove to be very sticky. It might be solvable with smart contracts...
Shouldn't be that hard, even using just the features available in Bitcoin today. You'd need a 2-of-2 multisignature wallet for each account holder, with one key held by Coinbase and the other by the user. When the user wants to spend some coins, he or she signs a transaction which Coinbase prepares and counter-signs; neither the user nor Coinbase—or an attacker with access to Coinbase's keys—can transfer the funds unilaterally.
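(A toy sketch of that spending rule, with HMAC standing in for real ECDSA signatures purely for illustration; it only demonstrates the "both parties must sign" property, not actual Bitcoin script.)

    import hmac, hashlib

    # Toy model of a 2-of-2 multisig rule: funds move only when BOTH the
    # user and the exchange have signed the same transaction.
    def toy_sign(key: bytes, tx: bytes) -> bytes:
        return hmac.new(key, tx, hashlib.sha256).digest()

    def can_spend(tx, sig_user, sig_exchange, key_user, key_exchange) -> bool:
        # Analogous to a 2-of-2 OP_CHECKMULTISIG output: an attacker who
        # compromises only the exchange's key still cannot forge sig_user.
        return (hmac.compare_digest(sig_user, toy_sign(key_user, tx)) and
                hmac.compare_digest(sig_exchange, toy_sign(key_exchange, tx)))

    tx = b"pay 1 BTC to address X"
    assert can_spend(tx, toy_sign(b"user-key", tx), toy_sign(b"exchange-key", tx),
                     b"user-key", b"exchange-key")
    assert not can_spend(tx, b"forged-signature", toy_sign(b"exchange-key", tx),
                         b"user-key", b"exchange-key")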
That's pretty much exactly how the EtherDelta smart contract works. The contract can only move coins you've signed an order for, and only when it's properly paired with a matching counter-order.
I think the main catch would be scaling the speed (EtherDelta is limited to the ETH network's block rate). I don't know too much about Lightning's details, but given that they've demonstrated atomic cross-chain swaps, I bet it's set up to handle something exactly like what's needed.
Which is quite exciting... I just always moderate my enthusiasm because for all complex software projects, the devil (and 80% of the work) is in the details :)
That is one reason why I'm actually kinda wild about a distributed exchange like EtherDelta (as a proof of concept, at least). The security model is such that it literally doesn't exist in any country, anywhere in the world; it doesn't have servers that can be hacked, etc.
(Admittedly the website is housed somewhere, but it's little more than a GUI shell for the signing procedure calls; you could run it locally or interact with the contract directly.)
I think the big issue hindering that model is that they need something allowing users to pipe into traditional currencies. So far all the solutions to that have been IOU tokens like USDT (Tether), which end up generating governance headaches of their own as their value desyncs from the base currency due to arbitrage and supply issues.
Oh, I definitely think they can. Didn't mean to imply otherwise!
I do think the surface area for attack is much more limited, since the code is innately public (unlike centralized trading houses).
That (in theory) should make them more secure, particularly in terms of the transparency provided. I also think theorem proving, and languages amenable to it, can reduce the surface area even further, down to a matter of reviewing which assertions the theorem prover was handed, along with trusting the VM implementation itself. (The latter is also helped by having multiple independent implementations.)
But I don't think the tech is there yet. The languages currently being used aren't the greatest for theorem proving; no one's actually done much of that in practice anyways; there aren't "best practices" for upgrading your smart contract when a bug is found; and there's ALWAYS an assertion someone forgot to add to the test suite.
But I do think smart contracts could remove "server compromise" and "unauditable code" from the list of main dangers of an exchange, which does seem quite useful in the long term (once the ecosystem fleshes out a bit).
Follow-up just to clarify: the reason smart contracts drastically mitigate "server compromise" is that compromising / altering the operation of the VM (rather than merely exploiting a bug) requires a 51% attack on the entire network. That should generally be a MUCH more expensive undertaking than the value of any given smart contract (under the assumption that the network will be valued at least 2x more than any of its participating apps).
There's no insurance company that would write a blanket insurance policy for BTC. It is the other side of the "We are BTC! We are special! Rules of finance do not apply to us!" coin.
If they have the money to buy hashing hardware that's competitive, staking that money has a lower barrier to entry... You don't need space, hardware, or technical experience to assemble the rig. Just $.
Rich people in a proof-of-stake system will stake their money because doing so gets them more free money for no cost. Rich people in a proof-of-work system have to choose between spending their money on hashing hardware and spending it on other things.
The thing about rich people is that they don't spend most of their money. Instead, they keep it locked up in relatively illiquid investments that yield big returns over long terms. The poor are stuck with riskier short term investments, if they can afford to invest at all.
That doesn't fit. Per bitinfocharts.com the coin with the highest txn rate is Ethereum, followed closely by Bitcoin. The others are far behind. I'd say it's seeing a lot of use.
Ethereum's Casper addresses this as follows: while validator A could double-vote for multiple blocks, validator B could take those multiple vote messages and submit them as cryptographic proof of A's dishonesty. By submitting such proof, the protocol allows slashing A's stake and giving some portion to B as a reward for catching them. Thus validators are strongly incentivized to only vote for one block at the end of the chain.
Not that I'm totally sold on it, or grok it fully, but here's my rough understanding at the very highest level (for Ethereum's Casper)...
The idea is that a double-spend attempt by a cartel of validators could be included in a new block as cryptographic proof used to take away their stake entirely ("slashing" it). Where a BTC miner would merely lose the cost of an attempted double-spend block and could keep mining more bad blocks, removing their stake completely is like burning down their rig.
In order to avoid that punishment, they'd have to control >2/3 of the staked coins on the network. Which means that as long as the total amount staked grows in value proportionally to the network, this will be incredibly expensive. Adding rewards for staking further incentivizes long-term holders to stake part of their holdings, securing the network.
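(A stripped-down toy of the slashing condition described above; not Casper's actual message format, and the reward fraction is an arbitrary placeholder. The gist: two signed votes from the same validator for different blocks at the same height are themselves the proof, and anyone can submit them.)

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Vote:
        validator: str
        height: int
        block_hash: str
        signature: str   # assumed already verified as the validator's signature

    def is_equivocation(v1: Vote, v2: Vote) -> bool:
        # Same validator, same height, two different blocks: slashable.
        return (v1.validator == v2.validator
                and v1.height == v2.height
                and v1.block_hash != v2.block_hash)

    def slash(stakes: dict, v1: Vote, v2: Vote, whistleblower: str, reward=0.05):
        if is_equivocation(v1, v2):
            burned = stakes.pop(v1.validator, 0)                  # entire stake gone
            stakes[whistleblower] = stakes.get(whistleblower, 0) + burned * reward

    stakes = {"A": 1000.0, "B": 500.0}
    a1 = Vote("A", 42, "0xaaa", "sigA1")
    a2 = Vote("A", 42, "0xbbb", "sigA2")
    slash(stakes, a1, a2, whistleblower="B")
    print(stakes)   # {'B': 550.0} -- A's stake is burned, B gets a cut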
"ascii" the codec does exist under python. It's strictly defined as byte values 0-127, anything in the 128-255 range causes a decoding error...
>>> b"abc\xf0".decode("ascii")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xf0 in position 3
Thus, moving the default from "ascii" to an ASCII-superset should have no decoding issues for previously valid files, since those bytes were never valid to start with.
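A quick illustration of why that's safe: every byte string that decodes under "ascii" decodes to the exact same text under an ASCII superset such as UTF-8.

    # Bytes 0-127 mean the same thing in ASCII and in UTF-8 (an ASCII superset),
    # so any file that decoded before keeps decoding to identical text.
    data = bytes(range(128))
    assert data.decode("ascii") == data.decode("utf-8")

    # And the byte that broke "ascii" above was never valid in the first place:
    try:
        b"abc\xf0".decode("ascii")
    except UnicodeDecodeError as e:
        print(e)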
As the article points out, in a directed attack it just has to be an outage which affects the target server: e.g. compromise a firewall, LAN DNS, or managed switch in front of the server and "block" Duo.