Total Cookie Protection, built into Firefox, makes sure that facebook.com can’t use cookies to track you across websites. It does this by partitioning data storage into one cookie jar per website, rather than using one big jar for all of facebook.com’s storage. With Enhanced Cookie Clearing, if you clear site data for comfypants.com, the entire cookie jar is emptied, including any data facebook.com set while embedded in comfypants.com.
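In data-structure terms, the change is roughly the following (a toy sketch for illustration only, not Firefox's actual internals; all names are made up): the storage key becomes the pair of top-level site and embedded site, rather than just the cookie's own site, so clearing one top-level site's jar takes any embedded third-party data with it.

    // Toy model of partitioned cookie jars and Enhanced Cookie Clearing.
    type Jar = Map<string, string>; // cookie name -> value

    // One jar per (top-level site, embedded site) pair.
    const jars = new Map<string, Jar>();

    const key = (topLevelSite: string, embeddedSite: string) =>
      `${topLevelSite}|${embeddedSite}`;

    function setCookie(topLevelSite: string, embeddedSite: string, name: string, value: string): void {
      const k = key(topLevelSite, embeddedSite);
      if (!jars.has(k)) jars.set(k, new Map());
      jars.get(k)!.set(name, value);
    }

    // facebook.com embedded in comfypants.com writes into comfypants.com's jar...
    setCookie("comfypants.com", "facebook.com", "fr", "abc123");
    // ...which is a different jar from facebook.com visited directly.
    setCookie("facebook.com", "facebook.com", "fr", "xyz789");

    // Enhanced Cookie Clearing: wipe every jar belonging to one top-level site.
    function clearSiteData(topLevelSite: string): void {
      for (const k of jars.keys()) {
        if (k.startsWith(`${topLevelSite}|`)) jars.delete(k);
      }
    }

    clearSiteData("comfypants.com"); // facebook.com's embedded data goes too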
This seems exactly right: now that we have partitioned cookies, cookie clearing should clear cookies for the whole partition.
It's ridiculous to suggest that this was somehow all nefariously intended by Google et al. How do you then explain that's what Firefox has done all the way up until now?
No -- it's just how cookies were meant to work from the start, the most obvious implementation before the privacy/security/tracking implications got worked out, which has taken many years.
And Google's working to make similar improvements to Chrome.
So not "insane" at all. To the contrary, it was entirely reasonable at the beginning, and now we see browsers reasonably addressing the problems that have arisen.
> How do you then explain that's what Firefox has done all the way up until now?
The fact that for a long, long time the vast majority of Firefox's income has come from search engine partnerships, a category Google dominates?
Also: Firefox has been rather poor about user privacy. Integrating third party stuff that's difficult to remove, like Pocket, for example.
There was the whole "Looking Glass" debacle where they dropped a Mr. Robot promotional plugin into Firefox completely silently.
When someone posted in Bugzilla about it, the project manager for the plugin made the thread employee-only. It was then changed back to public briefly, before disappearing for good, reportedly being locked so even employees can't see it.
Ask yourself: "why is a bug filed about a promotional plugin so secretive that not even employees can view it?"
BTW: Guess where that project manager used to work before she worked at Mozilla? Answer: an online advertising and analytics firm (according to her LinkedIn profile at the time.)
This is completely irrelevant to user privacy, because Pocket doesn't exfiltrate any data. The source code for the integration is open source; you can go look this up yourself.
Couldn't a smart person have figured out exactly how that cookie model could be abused like, within days of it existing? Was it really something that only got figured out with time?
In the early days, the internet was seen as a massively playing-field-leveling and decentralizing force ("the net interprets censorship as damage and routes around it"), not a massively centralizing one (Facebook is the world's only newspaper).
In a model where everything is decentralized and leveled, no player is big enough to worry about.
A smart person could have figured it out, but it was extremely unlikely.
The economics sub-discipline of economic geography was being developed at about the same time as Eternal September.
The key insight (one of the key insights) from that research is that as the absolute cost of transport goes down, previously insignificant differences in cost become important. This leads to the development of "hubs" - centralization.
(Here we're talking about information transport, and the cost being time per bit.)
But as you say, at the time the tech world could never have believed that centralization was the default expectation, nor designed things to compensate.
The classic text is Fujita, Krugman and Venables, The Spatial Economy (MIT Press, 1999).
The internet observation is an adaptation of the original work on goods trade to other transport forms. I forget where I first read it--sorry! Maybe someone like Clay Shirky, but not the man himself.
Someone could, and people did. DoubleClick was founded in 1995 and was using cookies for tracking user interest across sites by 1997 (or earlier; hard to pin down). There was lots of discussion of this at the time:
Any web site that knows your identity and has a cookie for you could set up procedures to exchange their data with the companies that buy advertising space from them, synchronizing the cookies they both have on your computer. This possibility means that once your identity becomes known to a single company listed in your cookies file, any of the others might know who you are every time you visit their sites. The result is that a web site about gardening that you never told your name could sell not only your name to mail-order companies, but also the fact that you spent a lot of time one Saturday night last June reading about how to fertilize roses. More disturbing scenarios along the same lines could be imagined. There are of course many convenient and legitimate uses for cookies, as Netscape explains. But because of the possibilities of misuse we recommend disabling cookies unless you really need them. https://web.archive.org/web/19970713104838/http://www.junkbu...
(Disclosure: I work in a part of Google that's descended in part from DoubleClick. Speaking only for myself.)
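The "synchronizing" mechanism the quote describes fits in a few lines even today. Here's a hypothetical sketch of a tracker's "sync pixel" endpoint (the endpoint, parameter names and port are all invented for illustration, not any real company's implementation): the partner site embeds an image URL carrying its own user ID, the browser attaches the tracker's third-party cookie, and the endpoint joins the two identities.

    import { createServer } from "node:http";

    // In-memory join table: tracker uid -> partner uid.
    const idPairs = new Map<string, string>();

    createServer((req, res) => {
      const url = new URL(req.url ?? "/", "http://tracker.example");
      // ID the partner site put into the pixel URL, e.g. /sync?partner_uid=abc123
      const partnerUid = url.searchParams.get("partner_uid");
      // The tracker's own third-party cookie, attached by the browser.
      const trackerUid = /(?:^|;\s*)uid=([^;]+)/.exec(req.headers.cookie ?? "")?.[1];
      if (trackerUid && partnerUid) {
        idPairs.set(trackerUid, partnerUid); // the two sites' profiles are now linked
      }
      res.writeHead(200, { "Content-Type": "image/gif" });
      res.end(); // a real tracker would return a 1x1 transparent GIF here
    }).listen(8080);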
Thanks, that’s what I was thinking, that advertisers figured it out early on, and they aren’t smarter or dumber than the rest of the professional population, so this shouldn’t be some surprise that took years to work out.
(I personally remember thinking exactly that, that cookies allow universal tracking, as soon as I learned of the concept, but I don’t want to put too much stock into that belief because of the possibility of hindsight bias and misremembering.)
The entire internet was built on the assumption of good actors and until recently non-secure protocols & models were the default.
Only in the past decade has there been serious consideration for encryption and security on the internet.
Before Let's Encrypt was announced in 2014, HTTPS was the exception, rather than the norm. It was only in 2016 that more than 50% of internet traffic was encrypted.
Secure DNS is still very much a work in progress.
BGP was built with the assumption of good actors, and doesn't include any security mechanisms.
Email still doesn't have any good options for E2E encryption.
The first cookie RFC, RFC 2109 (1997), was even more conservative:
An origin server could create a Set-Cookie header to track the path of a user through the server. Users may object to this behavior as an intrusive accumulation of information, even if their identity is not evident. (Identity might become evident if a user subsequently fills out a form that contains identifying information.) This state management specification therefore requires that a user agent give the user control over such a possible intrusion... --https://datatracker.ietf.org/doc/html/rfc2109#section-7.1
Early versions of Internet Explorer used to follow this and prompt about cookie storage all the time, to everybody’s great annoyance. Eventually it defaulted to always allow.
Now with GDPR prompts we've come full circle, except we get the UI of the web site instead of the user agent, allowing all kinds of dark patterns to be exploited and requiring re-prompts all the time for those of us who don't allow pages to keep cookies in the agent.
> How do you then explain that's what Firefox has done all the way up until now?
Google is historically the largest financial contributor to Mozilla (paying for its spot as the default search engine) and thus has always had leverage over what they do with FF.
There were a few years there where Moz flexed on Google by making Yahoo the default, but then switched back to Google last year. My guess is they had to show Google they were willing to go elsewhere in order to regain some of their autonomy, which is why it's only in the last couple of years that FF has been willing to add customer privacy features by default despite directly hurting FB/Google's ability to track users.
Advertising targets trust Google. There is no reason for them not to trust this company. Google has the privacy of its advertising targets as its highest priority.
Mozilla gets 90+% of its operating budget via a deal with Google, but Firefox development is not influenced at all by Chrome. Totally independent.
Big Tech exists for users, not advertisers. Privacy must come first and money must come second. That's why we have more privacy than ever and Google does not make much money. Government regulation is totally unnecessary. All incentives are aligned toward greater privacy.
Google will "build a more private web" for its advertising targets. Sorry advertisers. :(
Mozilla was doing this literally before Google existed. The origins of how cookies work predate Google as a company, let alone as an advertising platform. At the time, Netscape was not beholden to an advertising company at all.
The previous poster is correct in their historical analysis. Your comment does not change its accuracy at all.
> It’s insane that this hasn’t been the default all along across all browsers
Historically cookies weren't partitioned by site. So if you went to clear the cookies for https://publisher.example, then the browser wouldn't know whether to also clear cookies for https://other.example.
(Cookies are still not partitioned by default in Firefox; it requires turning on Total Cookie Protection)
It isn't insane, it's a leftover from when you didn't have to assume that most sites you use are actively hostile to you.
If it didn't come with all the tracking and privacy implications, being able to see your friends' comments on a site first, use social widgets etc. is a feature.
This will also break some sites, some of which will never get fixed, so this is a hard change to make (but necessary at this point).
As others noted, I'm not sure there's a profit motive to blame here, but yeah, it feels like browsers are constantly playing catch-up (indicated by ever-stronger words for the features) rather than switching to a better-engineered, more robust model -- reminds me of PHP's treadmill of "no-really-totes-secure-this-time-sql-call".
The world kept turning all these years before people got unreasonably paranoid about cookies and ad networks. I think it's all pointless theater. I wish Mozilla would focus more on browser customizability and other extension powers like we used to have with XUL and bringing the mobile browser up to speed instead. I couldn't care less about a Facebook tracking cookie.
I agree with the mobile browser (the only recent change they made, afaics, was to artificially disable most of the extensions and to make tab switching worse), however it is not just about ad networks that people are paranoid about. Tracking is pervasive and there are many players which know way too much about what users are doing on the net.
I don't even care if they track me - what I care about is that they track mostly everybody. Such power should not be underestimated.
Law enforcement is exempt from GDPR and other online privacy acts and we all know how much intelligence agencies know. The people who it matters if they track you are still tracking you. All that changed is it's harder to make money from ads and it's more expensive and dangerous to run your own web service.
Not true; a substantial part of the Schrems II decision was about how the GDPR applies not only to law enforcement, but also national security surveillance. See eg ‘European Essential Guarantees.’
Not how I read it. To me it says facebook.com cookies set through site A are separate from those stored through site B. Even if you never clear anything, Facebook's cookies would no longer be a single cookie linking the two sites, but separate ones.
If you disallow third party cookies, then there is no use for this per website cookie jar. I've browsed the web like this for decades (since Opera 9, IIRC), and I had problems with at most 5 websites. YMMV, of course.
In my opinion, the simplest way to deal with cookies is to disallow third party, and to keep a white list of authorized websites. Cookies outside this white list should be deleted manually or automatically after a few hours. Extensions for this probably exist, but I've had bad experiences with extensions breaking or becoming intrusive, so I made my own where I hard coded the domains that I want to keep.
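For anyone wanting to roll their own along those lines, here's a minimal background-script sketch of the hard-coded-whitelist idea (assumptions: a WebExtension with the "cookies" permission plus host permissions; the whitelist entries are placeholders):

    declare const browser: any; // WebExtension API, e.g. via webextension-polyfill

    const WHITELIST = ["example.com", "mybank.example"]; // placeholder domains to keep

    function isWhitelisted(cookieDomain: string): boolean {
      const d = cookieDomain.replace(/^\./, ""); // cookie domains may start with a dot
      return WHITELIST.some((w: string) => d === w || d.endsWith("." + w));
    }

    async function purgeCookies(): Promise<void> {
      for (const c of await browser.cookies.getAll({})) {
        if (isWhitelisted(c.domain)) continue;
        await browser.cookies.remove({
          // reconstruct a URL matching the cookie's domain and path
          url: `http${c.secure ? "s" : ""}://${c.domain.replace(/^\./, "")}${c.path}`,
          name: c.name,
          storeId: c.storeId,
        });
      }
    }

    // delete non-whitelisted cookies every few hours, as described above
    setInterval(purgeCookies, 4 * 60 * 60 * 1000);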
Do you have issues with online payments? Things where you are transferred to some banking page to enter some two factor authentication or something like that.
That is one of the main issues I have when I do things like that: online payments fail in subtle ways and you aren't sure whether the payment went through or not.
I'm using FF almost exactly the same way as you describe, and have found the "Forget Me Not" addon to be great. Not allowing 3rd party cookies at all via browser settings, then the addon deletes all cookies for a specific site after closing the tab. Having a whitelist with 5-6 sites where I keep cookies forever.
Not my field of expertise, but I think it's also possible to over-estimate the protection this provides.
By the time sites are incorporating Google's own JavaScript code, tracking cookies can be stored by the site itself. It takes only a single action (look how much data is handed to the site via the URL of a search result, for example) and this site-specific cookie becomes part of wider tracking.
Not if this becomes the default in browsers with meaningful market share.
Also, blocking people who use privacy settings can be legally iffy. Many sites are already on thin ice, telling people to use their browser settings if they don't want tracking cookies. Forcing tracking like that sounds like a recipe to receive an expensive lesson in GDPR.
You are right, but in practice it's a catch-22 situation. If companies start blocking privacy-aware browsers, then people will not use them, and they will not get market share.
This would be even better with a "Forget this site" button that could be added to the toolbar (if the user wishes to). Clicking on it would clear everything for the site and close the tab.
The nested menus to access it aren’t very convenient.
I do use CookieAutoDelete to handle this for closed tabs.
> What I really want is for all cookies to be deleted when the browser exits, except for the sites assigned to the containers I've created.
Then Cookie AutoDelete (CAD) is exactly what you'd want. Firefox has a built-in option that deletes all cookies except exceptions after the browser is closed, but in my opinion that function is too limited. CAD filters domains at the container level, while Firefox's doesn't. CAD also offers regex matching for domains, which is really useful. My favorite feature is greylisting everything in the default Firefox container (using the * regex for the greylist).
CAD is compatible and best used with Firefox's Multi-Account Container.
What I really want is for all cookies to be deleted when the browser exits, except for the sites assigned to the containers I've created.
At least Firefox 90.x has a checkbox "Delete cookies and site data when Firefox is closed" in the privacy section, along with a button to "Manage exceptions..."
I would much rather this happen on tab close rather than browser close.
I rarely close my browser except for updates, or when I'm specifically taking advantage of delete on exit. However, delete on tab close would be dreamy.
I don't really want it to be an extension. I don't like the power given to extensions, so the less I use the better. Sorry decent extension devs, for me the bad guys have tarnished the trust to just not want to use any.
This is what I do, I have it clear cookies every 60 seconds, but it keeps my container cookies.
Just set the site to 'always open in ..' the specific container, and all is good.
No, there are two types of rule in Cookie AutoDelete (CAD), Greylist and Whitelist, and you have to read through their documentation to understand how they work. You have to assign these domains to the Greylist (whose filter settings have to be pre-configured first: you decide what is kept/removed under the greylist rule).
In combination with the Firefox containers feature, I use CAD's Enable Automatic Cleaning, which deletes all cookies (as well as domain-related content) except those in the whitelist. That has saved me a lot of time in manual greylisting.
This version of Firefox is also the first major piece of software to be translated into Scots - a language spoken / understood by 1.3m people in Scotland.
My previous company worked on the translation and they told me they had fun trying to come up with suitable equivalents for technical words such as "minimise" and "maximise".
> Despite his well-meaning enthusiasm, AG admitted to making many mistakes. For example, AG used the Scots Online Dictionary to look up specific words in Scots. But the content that AG added to Wikipedia was not a true translation because he did not fully understand Scots grammar and syntax.
this version o' firefox is an' a' th' foremaist major piece o' software tae be translated intae scots - a leid spoken / understaun by 1.3m fowk in bonnie scotland.
Mah afore company worked oan th' translation 'n' thay tellt me thay hud fin trying tae come up wi' suitable equivalents fur tekky wurds sic as 'minimise' 'n' 'maximise'.
I've heard the Scots version of Harry Potter and the Philosopher's Stone is really well done, and is fun to read even as an American. That said, I have no experience at all with Scots, so my opinion should have very little weight!
I know his name and who he is. I vaguely recall some of his poems being recited at school.
I like to think I'm a man of the world. I watch a lot of Scottish TV (Burnistoun etc.), have done the NC500 and have plenty of Scottish friends. Of course there is dialect there, but I didn't realise this was a "thing".
There's a whole canon of literature written in it (Dunbar, Fergusson, Ramsay, MacDiarmid, Henryson, Kelman, Leonard, Soutar etc.) as well as a bit of a discussion whether the literature reflected or embellished Scots (Burns and MacDiarmid both made up words, for example). When people think of Scots they usually think of the Glaswegian dialect of English you commonly hear on the TV. But there's still a whole set of words in common use today around Scotland. You hear it more in rural areas and less in the central belt.
That's really a pity. In England there is a terrible lack of knowledge of the other parts of the Union. It's because of decisions like the not-quite-full-on-Scots but truly brilliant Limmy's Show only being shown on BBC Scotland.
It's a self-reported stat (census answers), so that many people at the very least believe they know what Scots is (as they responded that they do speak it).
I did find it striking that the same stat for Gaelic was just over 50k in contrast: while I know the level of Gaelic spoken in Scotland is extremely low, it's at least a better known language internationally than Scots is, so I would've expected it to be the more spoken of the two.
Most people know a little bit. That's the thing with Scots, it's highly miscible with English so it's hard to tell where one ends and the other begins.
In fact, literally the only reason it's called "Scots" and not "Inglis", as it originally was, is that as the Lowlander Scots gradually developed a sense of national identity separate from the English, they decided that they wanted a national label of their own. But of course, they still didn't want to share a national label or identity with the hated native Celtic-speaking population.
And so "Inglis" became "Scots", while "Scottis" - the native Goidelic language - became "Erse", or Irish.
It is not unique to this case that how we divide and understand languages is tied up with the politics of nationalism. It has been since the start of modern nationalism. What we call "Italian" could be called "Florentine"; it wasn't spoken in all of "Italy" until it became a political project to make it so...
And...
> Until about 1800, Standard German was almost entirely a written language. People in Northern Germany who spoke mainly Low Saxon languages very different from Standard German then learned it more or less as a foreign language. However, later the Northern pronunciation (of Standard German) was considered standard[4][5] and spread southward; in some regions (such as around Hanover), the local dialect has completely died out with the exception of small communities of Low German speakers.
> It is thus the spread of Standard German as a language taught at school that defines the German Sprachraum, which was thus a political decision rather than a direct consequence of dialect geography. That allowed areas with dialects with very little mutual comprehensibility to participate in the same cultural sphere. Currently, local dialects are used mainly in informal situations or at home and also in dialect literature, but more recently, a resurgence of German dialects has appeared in mass media
What is the thing you think needs to be "resolved"? If it's dispute over the names of languages, I'm not sure that was the nature of any dispute in these 19th century examples, or if it did, if it was ever "resolved" by anything except power to impose it.
I'd suggest not, it is a peer / sibling of Modern English, and descended in parallel. Northumbrian Old English eventually became Scots, due to the 'English of the Lothians' using it (and eventually 'Inglis').
Go read some older Scots from around 1600, you'll probably have a harder time of it than the same age English because they were and are distinct. Modern media, a lack of formalised spelling, and simple economics post union has probably been the major factor in its slow decline towards death.
So Scots (in its various dialects) and Geordie/Mackem/Northumbrian are, I'd suggest, dialects of the same language, that language not being English. Speakers code-switch between them.
You've also missed out the other language, which was spoken in the 'Old North' and the Kingdom of Strathclyde - i.e. the Brythonic speakers.
A lot of people speak many Scots words day to day, but won't necessarily consider themselves to "speak Scots", or even realise that the dialect they speak has an official name.
To test this theory, I just asked my mother in law (who is from the central belt) about Scots, and she replied, quite seriously: "Whits that? I dinnae ken whit that is, I spik proper!"
To be fair, that number is self-reported and combines "speak, read, write or understand Scots".
I'm not Scottish myself, but even I could claim to somewhat understand Scots. I wouldn't say so on a census, but I'm sure there are plenty that would. Especially when there's some national pride at stake.
Not at all. Catalan is the first language for a large part of the population. In Scotland, even for most native Scots speakers, English is in fact the first language.
Well, this example is quite curious. Catalan is understood and used by most of the people who live in Catalan speaking regions, but it isn't the first language of most of the people who live there. Pretty much the 95% are bilingual (Catalan and Spanish).
In fact, more people speak Spanish as their mother tongue than Catalan but they switch between them when required.
Maybe the Scots situation is similar, people learn and use both languages and change to the one they feel most comfortable with.
Scots is a dialect of English. So finding idiomatic Scots expressions for technical terms, instead of importing them verbatim, really is about "having fun" rather than achieving any extra clarity in communication.
Describing Scots as a dialect of English is really more about political affiliation than any interest in linguistics. Plenty of people regard Scots as a distinct language.
I don't have any affiliation with England, or the US, and would also consider "English is a dialect of Scots". (More than one person in Galicia described the Portuguese language to me in the analogous way...).
Still, I think it's silly to go all kayfabe here and treat the languages as completely distinct. I have similar thoughts on Slovenian and Slovakian and Flemish.
People that have a vested interest in the suppression of the indigenous Goidelic language of Scotland - on both sides of the England-Scotland border - will always insist upon the full-fledged distinctive language status of Scottish English.
Unless I misunderstood your point, that is a strange take that isn't close to being true. There is a large overlap between enthusiasts who advocate for Scots as a distinct language and those who advocate for Gaelic; the demographics of the recent surge in popularity of Gaelic on Duolingo make that clear.
The overlap between political commentators who both insist that Scots is "just a dialect" and that Gaelic is a dying language that we should discourage is also very apparent.
The subtext is pro indy people are generally pro Scots and Gaelic, and unionists are against both of course.
Scottish English != Scots. The former is just English with a Scottish accent; the latter is a closely related (to English) but distinct language with its own vocabulary and grammar, not dissimilar to the relationship between Norwegian and Danish, or Czech and Slovak.
Scots being a language has nothing to do with suppressing Gaelic. Generally people who hate Gaelic hate Scots equally.
The fact that you and other anglophones call the indigenous Scottish variety of Gaelic simply "Gaelic" is a pretty good example of why I continue to be very, very suspicious of those who insist upon "Scots" being a language fully distinct from English, and not a dialect - and insist upon calling it by that name.
The Irish and Scottish varieties of the Goidelic language family have far less mutual intelligibility than the English and Scottish varieties of English. Scottish English forms a pretty smooth continuum between "English with a Scottish accent", and what you'd call "Scots" or "Lallans".
But Scottish Gaelic is the tongue that gets the downgrade to "Gaelic", despite it being simply called Scottish for the vast majority of Scotland's history. Despite it literally being the reason for the country's name.
Scottish English was literally only called "Scottis" instead of "Inglis" as the Lowlanders gained a greater sense of national identity and distinctiveness from the English further south. At that point, funnily enough, the Goidelic spoken in Scotland ceased to be called "Scottis", and became "Erse" instead.
It is quite impossible to separate this insistence on distinguishing "Scots" from English, from suppressive efforts towards the indigenous Gaelic language of Scotland. You can see the exact same dynamic in Northern Ireland, where unionists play up the supposed variety of "Scots" spoken by the Ulster planters and their descendants as a fully distinctive language equal to Irish, as a means to delegitimize Irish as the primary indigenous language of the land.
I don't say all of this from a place of antipathy towards the speakers of "Scots". One need only read some Burns to see that the variety of English spoken in Scotland diverged heavily from the varieties spoken further south, and that diversity is beautiful. But the label is politically charged, and fundamentally it is a weapon - and always has been - pointed in the direction of Gaelic-speakers.
On both sides of the Irish Sea, too. Hard-line unionists in the north of Ireland have been pushing "Ulster Scots" in the last ten years or so. Not out of any real cultural association with the language or with Scotland - they overwhelmingly identify as "British" - but as a tool to diminish Irish-language initiatives. Every time there's a measure proposed to support Irish, they can propose an equal amount of funds for Ulster Scots.
It actually helps them to make the language seem as ridiculous as possible, since the real goal isn't to promote their language but to mock another.
It's up to the linguistic community to decide whether their variety should be considered a "dialect" of something else or a "language" in its own right. Linguists have already given up on that question; it's more useful to talk about varieties anyway.
And it's the same deal with Galician versus Portuguese, with a difference - "Scots is a dialect of English" threatens Scots, but "Portuguese is a dialect of Galician" doesn't threaten Portuguese (it threatens Galician instead).
"Or whether Danish and Norwegian and Swedish are the same language, but different dialects."
I recently watched the Swedish production "Blue Eyes" (with English subtitles), and was amused at the amount of speech which struck me as being 'English with an odd regional dialect'. Usually these were simple 'core language' statements and / or imperatives.
I guess that is more the lasting Viking and Dane Law impact upon the English, than English feeding in modern Swedish.
>I guess that is more the lasting Viking and Dane Law impact upon the English, than English feeding in modern Swedish.
It might be the result of common Germanic grounds, not necessarily lateral influence. I got the same when learning German - sentences like "das Haus ist rot" or "ich trinke Wein" are surprisingly easy to pick up from English. And after some time you start noticing patterns that help you further.
Fun increasingly off topic facts: a lot of those patterns were originally noticed and compiled by the Brothers Grimm (noted assemblers of fairy tales from across Germany) as they got caught up in the pattern of differences between Low German (the language families that include Dutch and Old English) and High German (what today we think of as the German language) as they assembled all the local fairy tales they could find. High German went through a consonant shift [1] that Low German did not. A lot of the pattern you can see when learning German and knowing a lot of older words in English is applying exactly that consonant shift, plus or minus English's own interesting Great Vowel Shift [2] and large influx of latinate words from French and other languages. (The Brothers Grimm even traced some of the shifts as far back as they could to proto-Germanic, making them some of the first explorers of Proto-Indo-European [PIE] sound shifts and Grimm's Law is named after them. [3])
The evolution of languages is fascinating. Circling somewhat back to the topic above: the difference between "dialect" and "language" is a complex subject just as most "speciation" debates in other evolutionary fields have a lot of hidden complexity. "Language" versus "dialect" versus "creole" doesn't have a lot of simple answers though historically that joke that "a language is a dialect with an army" tracks more than it doesn't which is why it is a good joke.
Yup. And theoretically, Grimm's Law would allow you to find those patterns even between some random Germanic vs. Romance/Latin pair; like e.g. plenus/full, tres/three, head/caput. Too many changes piled up to be useful though.
(What I find really funny is that some people show some sort of intuitive awareness of those regular sound correspondences, when dealing with closely related languages. I don't recall this among EN/DE speakers, but it's all the time among PT/ES ones: either joking "swap O with UE and you get Spanish" or "drop random consonants and you get Portuguese". Cue to "quiero una cueca cuela y un sorviete" pseudo-Spanish.)
Among the three you mentioned (language, dialect, creole), at least creole is well defined - it's the resulting evolution of a pidgin becoming a full-fledged language. At least in theory, because in practice we get partial creolisation and decreolisation of varieties.
I don't think creole is that well defined either: from certain perspectives Late Middle English was a creole of Early Middle English and French, the border zone between when Late Middle English could easily be considered to have been a creole versus where Modern English is definitely not regarded as a creole is really tough to define with all sorts of weird answers (from "it was never a 'true' creole because England still had an army on paper during the Norman Conquest" to "it stops being a creole when you have an empire and colonies are building their own creoles of your language" and all sorts of other ideas).
Late Middle English was not a creole, and that is not a matter of perspective - it's just a descendant of Early Middle English. A bunch of Norman borrowings didn't change that.
A creole is by definition the descendant of a pidgin, a patchwork of words "glued" with some ad hoc grammar, that looks nothing like the grammar of the parent languages. A good example of that would be the Jamaican Patwa basilect:
* Dem a kuuk akara fi im (lit. "them are cook akara for him"; "they're cooking akara for him/her")
* Im a kuuk akara fi dem (lit. "him are cook akara for them"; "he/she's cooking akara for them")
Even if most words are clearly English (except akara - a fritter), the grammar looks nothing like it. It was rebuilt from scratch. We can't really say the same about EME vs. LME, where there's a clear transition from one to the other.
>from "it was never a 'true' creole because England still had an army on paper during the Norman Conquest" to "it stops being a creole when you have an empire and colonies are building their own creoles of your language" and all sorts of other ideas
The presence of an army or "metacreoles" is irrelevant. Kreyol, for example, would still be a creole even if the Haitians built a thalassocracy.
What matters is the presence of a linguistic community, that kept speaking their language as they always did. Normans only replaced the local nobility, but the Germanic speakers were still there - speaking among themselves in their Germanic varieties, even if they had to butcher a "pig" or a "cow" because of some fancy noble wanting "porc" or "beof".
Danelaw had some _significant_ impacts on English as a language: the third person pronouns (they/them/theirs, etc.) come from Old Norse and supplanted the existing Old English pronouns. To borrow many words is one thing (including common items like "egg", cognate with Swedish ägg), but to borrow pronouns shows some pretty profound shifts in the language. Regardless, this—combined with the loss of inflection, which is typically attributed to the Norse influence—shows how extensive the influence of Old Norse was.
There's definitely some similarity between the two Germanic languages, but the North and West Germanic languages had started to diverge by the point of Danelaw, though the Battle of Maldon does record the languages as being mutually comprehensible at that point.
Those borrowings barely affected the core vocabulary, which is still distinctly West Germanic. "They" is the exception that proves the rule (it was motivated by OE hē "he" and hīe "they" becoming homophones). And the loss of inflection was likely caused by internal processes, such as the erosion of word endings (it was the same deal with Vulgar Latin / Romance languages).
And more importantly: I don't think there were a lot of sound changes triggered by Norse influence, and those are the most relevant factor behind mutual intelligibility. Some odd non-core vocab here and there is easy to skip, and still get the "rough" meaning of a sentence, and speakerers cannen sentencen still understanden, eben mit somes oddes endinges.
> Modern Scots is a sister language of Modern English, as the two diverged independently from the same source: Early Middle English (1150–1300)
> As there are no universally accepted criteria for distinguishing a language from a dialect, scholars and other interested parties often disagree about the linguistic, historical and social status of Scots, particularly its relationship to English. Although a number of paradigms for distinguishing between languages and dialects exist, they often render contradictory results. Broad Scots is at one end of a bipolar linguistic continuum, with Scottish Standard English at the other. Scots is sometimes regarded as a variety of English, though it has its own distinct dialects; other scholars treat Scots as a distinct Germanic language, in the way that Norwegian is closely linked to but distinct from Danish.
You are probably thinking of Scottish English or Scots English, which is essentially English with some words and phrases from Scots and a very strong accent.
Scots proper is as much a language of its own as English is.
Scots and what we now think of as English arguably have a similar age and a lot of shared heritage, though obviously, given how much separation, invasion and other sources of variation & remixing of languages there has been over time, it is tricky to tie down completely what came from where and when.
I hate how Safari on iPad does not have proper Cookie handling…
The only solution is to permanently use Safari in "private mode", but it has some limitations (it won't remember your browsing history, and you can't set a default font size, for example).
Firefox on iOS can do this, but you can't set a font size at all. So many websites (like Hacker News) are nearly impossible to read on an iPad.
Gosh, I'm happy I'm not the only one. I have 20/20 vision, yet I have to turn up the HN font size on all my devices. It's the only site I've had to do this.
I do have a high DPI display, and HN is ridiculously tiny without manually overriding, because it's setting text size in pixels instead of points, for no good reason. It doesn't make any sense. View source shows:
line-height:12pt; height:10px;
Why use device independent units for line-height, but not the text itself?
That's jlarocco's comment. He goes on to say that was done "for no good reason", which suggests he's opposed to using pixels and prefers using points instead.
The rest of his comment gave me the impression that he prefers using points to pixels because:
1. on a high dpi display, each pixel is smaller
2. css pixels correspond to physical pixels
therefore using "px" in css would result in small text, whereas points presumably wouldn't have this issue.
>> "px" in css doesn't correspond to literal pixels on the display.
That's my comment. I mentioned this fact to correct his assumption that "px" in css corresponds to physical pixels on the display. I didn't explicitly state that 1.33px = 1pt because I couldn't find the exact reference for it, but I did go on to state it later.
>You ”justified” (I don’t know the intent of your comment, really) the use of points because pixels don’t match physical pixels.
no, I justified the use of pixels, because they were in fact independent of the physical pixels on the display, contrary to what jlarocco thinks. If they did in fact correspond to physical pixels on a display, that would be a defect on high dpi screens, because it would result in smaller text.
Thank you. It seems "px" is really the worst of both worlds, then.
It's not device independent like "pt", it's not what most people expect it to be (one device pixel), and there's subjective "wiggle room" in what it actually means.
>It's not device independent like "pt", it's not what most people expect it to be (one device pixel), and there's subjective "wiggle room" in what it actually means.
This seems to also be incorrect. px and pt are both absolute units, and 1.33px == 1pt. If you want relative units you need to use something like em.
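For reference, that ratio comes straight from the CSS absolute-unit definitions, which anchor both px and pt to the same notional inch:

    1in = 96px = 72pt
    =>  1pt = 96/72 px = 4/3 px ≈ 1.33px
    =>  1px = 72/96 pt = 0.75pt

So on paper the two are interchangeable up to a constant factor; neither is "more physical" than the other.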
em and rem are relative to properties on other nodes in the document's CSS hierarchy; they're not relative to some screen-size specific metric.
I've not studied the topic in any depth, but I believe that an adaptive ruleset would just use CSS media queries (use this font size when viewport width is more than something). That is what Bootstrap does. Or, use viewport-relative units like vw, vh, vmin, vmax, but I doubt that would work well.
1rem is the font size of the html element, which, if you don't set it otherwise, comes down to user-agent decisions. That could be related to screen size or density, but it's up to the user agent; I don't know if any adjust it automatically by default (although it would make a lot of sense).
It is, in actual practice, based on user preferences though: if you set your browser to display fonts bigger, 1rem will be bigger. 16px or pt... might not be? So hard to keep track; this stuff is such an accretion of workarounds on top of legacies on top of odd choices.
>they're not relative to some screen-size specific metric.
The fundamental problem here is that the browser can only adjust for device pixel density and not other variables that affect visibility (eg. the viewer's visual acuity or the viewing distance). That said, using absolute units is still the best choice for text size, considering the other relative units (eg. relative to viewport size) are worse.
>I've not studied the topic in any depth, but I believe that an adaptive ruleset would just use CSS media queries (use this font size when viewport width is more than something).
HN has this. See the /* mobile device */ section in news.css.
I have a 123ppi display; normal is OK, but I have it zoomed out to 80 or 90%.
People are different, so it's good to have font size and zoom options. Some can go bigger, some can go smaller. I use 80% on a lot of sites, and an extension called 'Zoom Page WE' to remember the settings.
Because HN is not "designed" as much as it's just been written by dinosaurs who visit it via Lynx. My browser is currently at 150% and I'm basically a millennial.
The smallest font on this page is 9px, that's just ludicrous.
I mean just open the CSS file and judge for yourself.
- specify which websites may store cookies/cache. All websites not specified cannot store anything (and thus not track me), and all data for these websites is deleted once I close the tab
- I want to remember all my history
I can do this in Firefox on macOS (with some container extensions, can’t remember the names now)
I started using the DuckDuckGo app on iOS to do exactly this. You can choose to "fireproof" any site you're on, adding it to a list of sites that's immune to clearing all data.
I don't think it has a history feature at all though.
Safari used to have a feature where you could delete all cookies except for ones you had explicitly marked to keep. So, for example, I might keep ones like my bank login name and my amazon history, but very easily remove everything else. I used this all the time. Even better would be the option to delete all but selected cookies every time I closed Safari.
I'm a heavy user of the "firefox containers" feature, where I try to isolate the social media sites to their own containers (twitter, facebook etc) and also one for reddit. I've also got one for google products as well as I try to use DDG most of the time.
Anyway I wish FF had a feature that broke down the cookies PER container, so I could purge any ones that might have snuck in due to a lapse in my judgement, e.g. if I see facebook cookies in my "twitter" container then I'd like to purge them for that particular container only.
FF only allows you to do a global purge.
EDIT: I can see this bug was raised 3 years ago which suggests it _used_ to be a feature that got removed, but sounds like it was never put back in/low priority/WONTFIX. https://bugzilla.mozilla.org/show_bug.cgi?id=1480175
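In the meantime, this is scriptable from an extension. Here's a rough sketch using the contextualIdentities and cookies WebExtension APIs (assumptions: the "cookies" and "contextualIdentities" permissions; the container name and domain below are just examples):

    declare const browser: any; // WebExtension API, e.g. via webextension-polyfill

    async function purgeDomainInContainer(containerName: string, domain: string): Promise<void> {
      const containers = await browser.contextualIdentities.query({ name: containerName });
      for (const container of containers) {
        // limit the cookie query to this container's cookie store
        const cookies = await browser.cookies.getAll({
          storeId: container.cookieStoreId,
          domain,
        });
        for (const c of cookies) {
          await browser.cookies.remove({
            url: `http${c.secure ? "s" : ""}://${c.domain.replace(/^\./, "")}${c.path}`,
            name: c.name,
            storeId: c.storeId,
          });
        }
      }
    }

    // e.g. purge any facebook.com cookies that snuck into the "twitter" container
    purgeDomainInContainer("twitter", "facebook.com");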
Cookie AutoDelete deletes cookies when you close the tab, right? That's pretty different from this feature, where cookies are only deleted when you ask the browser to delete them.
Nice feature, although I wish Firefox could fix the other side of the cookie nightmare: auto-fill the (now humongous) forms we need to fill in for our cookie settings on each website. I feel like I am applying for a mortgage a few times a day now.
I think by law they have to make it as easy to opt out as to opt in. So for almost all the websites I see, clicking 'More options' defaults to everything off. I now do 'More options' > 'Save' on autopilot (and if one or two bad 'uns slip through the net, so be it - life's too short).
The only problem is that there is often an option to allow cookies to "store site preferences" or something along those lines.
My experience is this also includes your cookie preferences which means if you don't enable that single option, you'll have to go through the steps to disable cookies pretty much every time you visit.
And NOYB has started pushing for more enforcement, so it's likely sites will become more compliant.
It turns out that those third party "consent form in a box" solutions tend to have settings to let the website operators choose how user-hostile the popup is supposed to be. It's a shame the DPAs are all understaffed, incompetent, unwilling, or willfully looking away (e.g. in the case of Ireland) instead of taking expensive enforcement action against companies that violate it.
A few expensive examples and every site would have a top-level, equally visible "reject all" button, and after a while, sites would realize that with 90% of people choosing that, they might as well skip that popup and assume rejection.
Those popups aren't mandatory at all, sites can simply respect your privacy by default.
> Now, if you click on Settings > Privacy and Security > Cookies and Site Data > Manage Data, Firefox no longer shows individual domains that store data. Instead, Firefox lists a cookie jar for each website you have visited.
How is data from previous versions of Firefox handled? Will data from ad networks be listed as a "website you have visited", made unavailable for the embedding site's cookie jar, and re-fetched upon the next visit?
Yeah, I'm guessing this was a deliberate joke for us here. The author knew this would be closely read and critiqued by the folks here. I'm sure s/he was waiting for exactly your comment on the HN thread.
And this is also the release where they drop the support for disabling "Proton" UI and therefore Firefox will no longer respect my system color scheme, and I will lose even more vertical space. Thanks...
Noooooooooooooo. I just updated and ran into this as well. It is still possible to enable the compact mode from about:config but it looks like the new tab style can't be easily changed anymore.
>cookie handling that lets you fully erase your browser history for any website.
how about allowing me to whitelist and blacklist cookies from a button? why did that feature have to disappear in the first place? instead I now have a menu in about:preferences#privacy that requires a full URL to be entered, added with a button, then confirmed with "save" in what appears to be an effort to get me to just accept cookies.
what's worse is if i switch between 'allow cookies' and then back to 'custom', my selection to block all cookies isn't honored at all. instead i get put back into 'block third party cookies.'
finally there's the misery of including blocked sites in the 'preferences' you can delete as part of your browser history, which seems like an effort to further reduce my predictable and consistent ability to block cookies altogether.
> lets you fully erase your browser history for any website
I just want a button to whitelist a domain and an option to automatically clear 100% of everything outside the whitelisted domains on every restart.
And always clean the cache, perhaps even for whitelisted domains unless the system is on a metered/slow connection.
Also, every website should always be opened in a separate "container" so cross-site tracking won't work.
If Chrome did that today this would trigger a cascade of consequences. If Firefox did that today it would just improve Firefox popularity and cause no problems.
As a number of people above mentioned, Cookie AutoDelete extension for FF does exactly this.
By default, sites have all their cookies cleared the moment you close the tab.
Greylisting clears them when you close the browser.
Whitelist retains all cookies.
Multi-account Container and Temporary Container extensions take care of your per-tab container needs.
Although I use Cookie AutoDelete and open GMail (which I didn't whitelist) in a separate container, GMail still remembers me somehow. I have to manually go and clear all the saved data in the Firefox settings to actually reset everything.
It sounds like you're hitting a Firefox bug affecting Cookie AutoDelete. Something about the recent changes to cookies has prevented extensions from clearing partitioned cookies.
Great, this is really going in the right direction! So far I had been using different firefox startup profiles for different activities to emulate this behaviour (works even for extensions).
An observation from their animated demonstration: the recent history includes sites like Facebook, Google, Reddit, and Twitter yet the site they choose to forget about is Hacker News.
It was worth a chuckle.
As a heavy user of Multi-account Containers, I will be interested to see how this feature interacts with it. I use containers to maintain multiple profiles on websites, and losing all the data for those websites after clearing cookies can be frustrating.
A side benefit is that this also fixes bugs. Weather.com bugged out on me because it uses supercookies; it would send me to the wrong link when I clicked in the search bar for my saved location. I had to manually go through and delete all the copies of the same cookies in local storage and the other places they put them.
The next step is for Firefox to finally adopt Tor Browser's protections and natively disallow fingerprinting by default.
Offtopic, but since you brought it up and outed yourself as a user: what's the appeal of weather.com vs weather.gov? The .com collects your data and shares it with 3rd parties, and I'm guessing they get much of their data from the National Weather Service anyway. Is there some service/feature that draws you to them?
It sure takes a lot of work from the rest of the community to reduce google and facebook's ability to keep us under constant surveillance. My personal relationship with facebook feels fully abusive at this point.
I'd love to see a toggle button on the URL bar for every website.
By default it's off, and it means that cookies are deleted as soon as I close the last tab for that website.
Clicking it whitelists the website and cookies are retained until I turn it off again.
This is kind of like the approach we had in the nineties. You used to get a prompt for each website, asking if you wanted to allow cookies or not. This is like a second iteration on that.
This is pretty much exactly what Cookie AutoDelete does. I love it. Although there's a Firefox bug currently that's preventing many cookies from being deleted which has been annoying, but should hopefully be resolved soon.
Great. Finally. I always wondered why some sites reopen with data relevant to a past date (a weather web site, for example), even though all history, cookies, site data, etc. have been set to clear upon shutdown. Which always makes you wonder what else is stored.
Firefox doesn't have enough market share/power to dictate terms to website owners; if they just disabled third-party cookies, websites that broke would just tell users to switch to Chrome.
Chrome isn't going to tackle tracking/fingerprinting for obvious reasons.
Can anyone tell me why, since version 68 of Firefox on Android, nearly all my add-ons don't work? Is there an alternative for "I don't care about cookies"? Add-ons working on mobile in Firefox were the main reason I'm still a Firefox user. They have pretty much killed that by breaking compatibility.
The reason is that the Android part of Firefox got a complete rewrite when they switched to the new engine. They switched from a deeply integrated system over to a more separated frontend for GeckoView, a generic webview component based on Gecko. This brought a lot of changes, particularly to the UI and the framework surrounding the existing addon code.
Secretly, a lot of addons will run just fine. You can install them in the Firefox nightly through the "secret settings" (tapping the Firefox logo in the about screen seven times) by creating an addon collection and stuffing the right ID in your browser.
I can say the new engine is notably faster and the UI is easier to use for basic tasks, but all of the features that made me switch to Firefox on Android in the first place have been removed. Slightly nonstandard features ("being able to use your own CA" or even "being able to ignore TLS warnings") took years to implement, and logging into a website with a client certificate is still not possible.
They even took about:config from us in the stable builds, because they consider their users babies that will change random settings and break something. Firefox has dropped all support for power users and has focused on becoming Chrome 2.0, a goal which I don't think they'll ever be able to accomplish. If you don't follow the standard workflow of the 80% who forget to disable Mozilla's stalking, you're no longer important.
I'm still on Firefox but every day I'm nudged closer to just switching to Bromite instead. The lack of proper addon support was understandable at first, but by now I hoped to have some decent addon support back already. I guess the team working on it must've gotten culled so Mozilla's CEO could afford their pay raise.
>Can anyone tell me why since past version 68 of Firefox on Android nearly all my add-ons don't work
Because they switched rendering engines or something. Now addons are restricted to a small subset that they've validated. You can use a custom addon collection to install untested addons (see: https://blog.mozilla.org/addons/2020/09/29/expanded-extensio...) to get around this, but there's no guarantee that the addons will work.
I can confirm the add-on I Don't Care About Cookies works just fine if installed that way. It's just inconvenient. I wonder what stops the list of approved add-ons from growing.
It also broke font sizing/rendering on sites like reddit.
I loved Firefox mobile and this did me dirty. One of the big draws was adblock, and on top of needing text and extensions they changed the UI to be antiproductive.
Their playstore ratings took a massive nose dive after that release. Shame. They are the only real browser competition to Chrome.
Does the font scaler in Firefox help? I actually switched to Firefox mobile several months ago because that finally resolved my issues with browsing i.reddit.com on Firefox. Chrome would just get the font scaling right, Firefox wouldn't before the major engine change.
I am annoyed that Firefox mobile tabs seem to have to refresh every single time I "tab out". I'm stubbornly sticking to it because of addons though (Dark Reader and uBlock).
The font scaling was available in the previous Firefox on Android (i.e. ≤ 68), too, but it had been disabled by default for a long time because while it does make text more readable on pages written without mobile phones in mind, it can also cause some layout breakage on some pages, and there was a little tug-of-war between people preferring the former even at the cost of some possible layout breakage, and those wanting to avoid the latter even at the cost of unreadable text.
If you knew about it, it could still be re-enabled through the regular settings, though.
After having fixed a few bugs in that regard, I pushed for giving re-enabling it by default another try with the rewritten browser, and so far that decision luckily (from my point of view) seems to have stuck.
Additionally, it has recently turned out that for pages specifying an explicit desktop-sized viewport (i.e. something like meta name="viewport" content="width=1024", as opposed to either using nothing at all, which gives the standard desktop-size viewport of 980 px, or "width=device-width", meaning it's a mobile-friendly responsive layout), there was a long-standing bug whereby the font scaling for desktop-style pages was erroneously being deactivated on xxhdpi phones.
This latter bug affects the desktop versions of both Reddit and Slashdot for example, as both of those are using an explicitly sized viewport. It has now been fixed in Firefox 93 (https://bugzilla.mozilla.org/show_bug.cgi?id=1685756)
Damn it Firefox. Your cookie protection system is too damn interrupting and does not provide good enough protection.
I don’t want a security “profile” because I don’t fit in to whatever few boxes you have setup. Or maybe I just don’t trust what you do behind that security profile setup.
I want my own granular cookie control. Steal it from Chrome if you have to. It is the best thing since sliced bread.
I want a list of every cookie I have got, just like IE used to show and Chrome does today.
I want to set in the smallest detail which cookies are allowed, which are blocked, and which only last until I close the session.
I have uMatrix and uBlock with only my personal filter list. It is not good enough. I want something much like Chrome.
"Cookie Autodelete" plugin might be what you want, then. Sadly I don't know what "like chrome" means, so don't know if the plugin's functionality is more-or-less equivalent, since I never use Chrome.
Typing this from Firefox; been typing from Firefox since 2002. Never moved to Chrome. That's probably a small group of people for sure, and I'm actually happy with this update and the changes Mozilla has been making. I do think, completely against the grain of most Firefox fans, that they should've moved to Chromium ages ago. That would solve the reason people are actually leaving: issues with Google properties. I have my mother-in-law on Edge because there are fewer issues when she jumps on her karate class using Google Meet. There should be someone out there to battle Blink, I suppose, but without user share at all, how do you hold any weight anyway? The argument people make really doesn't make sense.
It's user share first; then you have weight to put towards web standards. And I'm afraid Mozilla doesn't have the resources and willpower to fight that battle anyway. If I were calling shots at Firefox I'd move to Chromium immediately and then focus on UI and privacy features. It's probably too late to make that change though. I just read Firefox lost 50 million monthly users in the last two years.
It's a little sad. There is room in the market for a 3rd-party, power user's browser, but it's obvious as can be that whatever browser that turns out to be, it will be on Chromium, and it won't be Firefox.