I've yet to see a normally configured browser _not_ be uniquely identifiable many times over through fingerprinting.
At some point it feels like trying to drain the ocean with a cup. Maybe we just need to accept that anyone who really wants to fingerprint you _can_ fingerprint you unless you use a specialist browser.
At that point the solution is fairly obvious: make it legally difficult to use unique fingerprinting and move on (i.e. stuff like the GDPR). People will still do it, but they'll have to balance it against not falling foul of the law and won't be able to abuse it too much.
We won't stop real-world facial recognition by all trying to make our faces more similar either. We have to accept that it's generally possible to do, but discourage the actual doing of it rather than trying to make it impossible.
(Note that in both cases, actually preventing it when you have a reason to is totally possible and valid, via specialist browser modes and physical masks respectively.)
I just tried a clean FF profile with resistFingerprinting enabled. No dice. Each attribute adds only a few bits of identifying information (unlike my main profile, which is already almost unique thanks to the Accept-Language header (English, then German)), yet it still adds up to 17.75 bits, which according to the EFF is unique.
I’m agreeing with you, though I wonder, is there any way to not be unique? What would you have to do? Use Windows with no extra fonts, Chrome in English, on a FullHD monitor with webgl/canvas/audio fingerprinting protection extensions?
Actually, resistFingerprinting + switching to the user-agent string Tor uses gets me 99% of the way there. All that's missing is the weird window size (vertical taskbar); if I could get that to report a default size, I'd actually be better off than Tor (they have a bunch of responses slightly more unique than FF with resistFingerprinting).
But it’s academic for me anyway, I have
Accept-Language en-US,en;q=0.7,de-DE;q=0.3
which is close enough to unique that nothing else really matters.
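For intuition on those bit counts: a trait's identifying information is the log of how rare it is across the browser population, and bits from independent traits add up. A minimal sketch (the ~220,000 figure is just 2^17.75 worked out, not the EFF's actual sample size):

```python
import math

def surprisal_bits(fraction_of_browsers):
    """Identifying bits contributed by a trait shared by this fraction of browsers."""
    return math.log2(1.0 / fraction_of_browsers)

# An Accept-Language value shared by 1 in 4 browsers contributes 2 bits:
print(surprisal_bits(0.25))      # 2.0

# A total of 17.75 bits means only about one browser in 2**17.75 (~220,000)
# shares the whole fingerprint, hence "unique" in any modest sample:
print(round(2 ** 17.75))
```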
> All that’s missing is the weird window size (vertical taskbar)
TBB actually adds a border at the bottom of the browser so the reported size isn't the actual size. If you change the size of your browser window to the tor-reported size then it should work.
Unless I'm misunderstanding and you mean something to do with the scrollbar?
Like siblings are saying, they use all available information to fingerprint you.
You can cover your identity only to the extent that you can display the same characteristics to the web server as the largest group of users that have all the same characteristics. This includes whether you have JS disabled as well as your IP address, User-Agent, display resolution, etc.
It's generally true, to be fair: reference counting could be used for every garbage-collected language (and would be much simpler). The only reason they switched to more complex schemes is that those are faster on average. Even smart schemes that try to remove unnecessary refcount changes will tend to underperform a (well-built) tracing GC. As for a reference: https://en.wikipedia.org/wiki/Tracing_garbage_collection#Per...
The point about predictability is totally valid though (and combined with simplicity is the reason many languages still pick ref counting).
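CPython is a handy live example of exactly this hybrid: plain reference counting for the common case, plus a backup tracing collector for cycles. A small demo (note `sys.getrefcount` counts its own argument as one extra temporary reference):

```python
import gc
import sys

x = object()
base = sys.getrefcount(x)       # includes one temporary ref for the call itself

y = x                           # aliasing increments the count immediately...
assert sys.getrefcount(x) == base + 1

del y                           # ...and dropping the alias decrements it right away,
assert sys.getrefcount(x) == base   # a known cost paid at a predictable time

# Pure refcounting can't reclaim cycles, which is one reason CPython
# also runs a tracing cycle collector on top:
a = []
a.append(a)                     # self-referential list: refcount never hits zero
del a
gc.collect()                    # the tracing pass finds and frees the cycle
```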
Technically yes, but if you only use it when you need it, it's often not a performance consideration.
And for systems programming, the often overlooked truth is that people care far less about perfect performance than they do about control. Ref counting doesn't give up control to a mysterious oracle running in the background which may or may not wreck your performance in hard-to-predict ways; it just pays a known cost at the time you use it, based on how you're using it. (That's not to say performance is irrelevant, but it's not always the top concern, and with control you can always rewrite slow code as needed.)
The obvious question is "why can't we have an optional modern garbage collector built into a systems language?", and it's a good question (I remember reading there was one in Rust for a while during early development, but it got removed). I think the main reason is that high-quality garbage collectors are incredibly complicated, with many trade-offs, and a GC that combines well with the rest of a language and various alternative memory-tracking solutions is harder than most. The projects that really want one can always implement their own and choose their own trade-offs, so there aren't many use cases where a generic language-provided one would justify the complexity of implementing it within the language.
> why can't we have an optional modern garbage collector built into a systems language?
It's possible in C#. Some language and runtime features like lambdas insist on using the GC, but with some care they can be avoided. The usability becomes worse without these features, but IMO that's not a dramatic downgrade.
Many pieces of the standard library in modern .NET don't require managed heap. Instead, they operate on stuff like Span<byte> which can be backed by anything: unmanaged heap, native stack, or even the memory mapped to user space by a Linux device driver (I did it with DRM, V4L2 and ALSA devices).
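Python has a rough analogue of that idea in `memoryview`: a typed, zero-copy view over memory owned by something else (an analogy only, not a claim that it matches `Span<byte>`'s stack-only semantics):

```python
buf = bytearray(16)      # the owner of the memory (could also be an mmap, array, etc.)
view = memoryview(buf)   # zero-copy view over it

view[0] = 0xFF           # writes go straight through to the backing store
assert buf[0] == 0xFF

half = view[8:]          # slicing a view is also zero-copy
half[0] = 0x42
assert buf[8] == 0x42
```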
The textual information of all books ever written comes to tens of terabytes.
So if the goal is to store all books for a rainy day, you can spend a grand on storage, or you can build the largest library on earth.
That said, a few physical books would be wise in case of severe energy loss (it would have to be very severe to make running something like a Kindle prohibitive, but if we're playing doomer then I'll accept a couple of shelves would be sensible).
I believe they do not alter the colour in the region being focused on, they're trying to take advantage of the fact that our peripheral vision is primarily rod-based, and thus unable to discern colour.
So in theory, assuming the display can keep up with eye movements, it should never show you a tightened or limited colour range that you are actually capable of detecting through your eyes.
I suspect it'll be subtly detectable in a couple of ways though: first through rapid eye movements (though I admit I have no clue how fast eye tracking/screen response can be), and secondly (more confidently) because your nose will shine green in the reflected glow of all the green pixels they're cleverly shining where you can't see them directly, and you can definitely see your nose.
Whilst your point is fair, I do think the Fitts' law benefits are pretty huge (i.e. the infinitely extending target because it's across the screen edge).
It really depends on usage patterns I guess. If you have lots of _really_ small windows then your point is entirely correct, but if people are using large windows then it becomes less clear - there's a fair chance they're moving almost as far to reach the edge of the window for the in-window menu bar, but not benefiting from the screen edge infinite target, at which point the macos model becomes superior.
The thing is I suspect the cut-off applies even for half-screen windows these days, I think the benefits of screen edge scale better than the benefits of window edge (since you can always move faster if you don't have to aim well). On the other hand, I guess there's an upper limit to practical window size, regardless of how large screens get.
I think Fitts' law is precisely why it's a bad design. I don't use the menu bar very often, so I'd much rather have something else on the screen edge. Like browser tabs, for example, which I use all the time and which sit on the screen edge on Windows/Linux.
Do they? It doesn't look at all like that when I google screenshots. It looks like there's some chrome in between the top of the screen edge and the start of the tab. At least in Chrome.
It's crazy to me that Chrome still doesn't support any real equivalent to this; it's 2022 and Chrome (AFAICT) still insists on shoving every single tab end-to-end at the top, squishing them into tiny unclickable slivers, like some absolute fucking maniac. Even Edge gets this right by supporting vertical tabs (albeit with far less sophistication than TST), and it was supposed to be a cold day in Hell before I ever gave a Microsoft-developed web browser any sort of praise.
It's not that Fitts' Law forgot it, but that the designer of "infinite height" menu bars (Bruce "Tog" Tognazzini) didn't foresee today's gigantic and multiple screens, and was designing for the original Mac, whose screen was tiny and singular in comparison!
Linking to an archive.org cache since Tog's web site's https certificate expired.
The Fitts' Law benefits of pie menus are significantly more profound than the Fitts' Law benefits of the "infinite height" menu bar that Tog describes in his classic article, especially on large screens. And they don't result in you moving the cursor far away from where you want to be next.
Fitts' Law says the target acquisition time (and error rate) is related to the target size (larger target = better, so pie menus make all the targets quite large, extending in all directions to all four edges of the screen; menu bars also do, but only in one direction, wasting three out of four screen edges), and the target distance (nearer target = better, so pie menus make all the targets uniformly quite nearby, exploiting all directions; menu bars don't minimize the distance, only exploit one possible direction (up), and you also have to move back too, so it's worse on a big screen).
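Concretely, the relation is usually written in its Shannon form, with $D$ the distance to the target, $W$ its width along the axis of motion, and $a, b$ empirically fitted constants:

```latex
T = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

The screen edge trick makes $W$ effectively unbounded in one direction, driving the log term toward its floor; pie menus attack both terms at once, shrinking $D$ to a small constant and widening $W$ into a full wedge.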
Also, multiple displays that you can move between throw a monkey wrench into the "infinite screen edge". Which may be one reason why nobody (except for the beautiful weirdo in the thread above ;) ) ever puts a screen above or below another screen, or a menu bar on the left or right edge, even though Windows has always let you do that, but the Mac doesn't.
There is another predictive model similar to Fitts' Law called "Steering Law" that applies to the narrow twisting "tunnel" from your current position, to the menu bar, through the menu and submenus, and back to where you need to be next, that you have to "steer" the cursor through in order to accomplish a task:
>The steering law in human–computer interaction and ergonomics is a predictive model of human movement that describes the time required to navigate, or steer, through a 2-dimensional tunnel. The tunnel can be thought of as a path or trajectory on a plane that has an associated thickness or width, where the width can vary along the tunnel. The goal of a steering task is to navigate from one end of the tunnel to the other as quickly as possible, without touching the boundaries of the tunnel. A real-world example that approximates this task is driving a car down a road that may have twists and turns, where the car must navigate the road as quickly as possible without touching the sides of the road. The steering law predicts both the instantaneous speed at which we may navigate the tunnel, and the total time required to navigate the entire tunnel.
>The steering law has been independently discovered and studied three times (Rashevsky, 1959; Drury, 1971; Accot and Zhai, 1997). Its most recent discovery has been within the human–computer interaction community, which has resulted in the most general mathematical formulation of the law.
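The general formulation the quote alludes to (from Accot and Zhai) integrates the inverse width along the tunnel's path $C$, and collapses to a simple ratio for a straight tunnel of constant width $W$ and length $A$:

```latex
T = a + b \int_{C} \frac{ds}{W(s)}
\qquad\text{straight tunnel:}\quad T = a + b\,\frac{A}{W}
```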
Fitts' Law and Steering Law also apply to the forgiving pull-right submenu design that the original Apple Human Interface Guidelines described, which Tog invented, Apple forgot about (but finally rediscovered), and Amazon reinvented, which I was just writing about recently here:
I love pie menus and wish they were used more. What I really like about them is that if they're well implemented, they can be the discoverable version of a gesture system.
You're right, that's a great thing about pie menus: they are "self revealing", since they show you how to use them.
There are three phases of using pie menus, and they support "rehearsal" to move you smoothly and seamlessly and unconsciously from novice to expert:
1) Novice: Click to pop the menu up. Look at the screen. Read the menu items. Browse the items by pointing at them, possibly revealing more information or hiding unselected item labels (see the Unity3D pie menus for an example). Each time you use the menu this way, you're rehearsing the faster mouse gesture to make the same selection.
2) Intermediate: Remember which direction the item you want is in. Press and move in that direction, then look at the screen and wait for the menu to pop up confirming that you have selected the right item. When you know you got the right item, release the mouse button to select the desired item. This increases your confidence that you can (unconsciously) move onto the next stage.
3) Expert: Remember which direction you want, and swipe (mouse ahead) in the appropriate direction, even making multiple selections through nested menus. It's not even necessary to look at the menu (you can keep looking at whatever the menu selection affects, like the object you clicked on to pop it up, and the pie menu gesture tracking can provide real-time feedback, previewing the changes during the gesture, before even showing the menu). At any time you can stop moving the mouse and the menu will pop up and reveal the possible and currently selected items and their directions.
Color selection, font style, or size selection menus are great examples of previewing menu selection effects without needing to show the menu, where it's more obvious to just see the effect on the object directly instead of reading about it in the menu, popped up over and blocking your view of the object itself.
Also, the distance as well as the direction can be used as a parameter, like a 2-dimensional font "pull-out" style (direction) size (distance) pie menu.
Linear menus with keyboard shortcuts suffer from the fact that the keyboard shortcuts are totally different actions from using the menus the slow way with the mouse. So slowly using linear menus with the mouse is NOT rehearsal for quickly using them with the keyboard, and all your time using linear menus the slow way is wasted, instead of rehearsing the fast way every time you use the slow way, as with pie menus.
Pie menus also support "browsing" and "re-selection", as opposed to marking menus or gesture recognition. All possible pie menu gestures are valid, easily understandable selections, while with marking menus and gesture recognition, most possible gestures are syntax errors.
So you can move the mouse into and around the pie menu (even before it's popped up) to browse different items (seeing their effect on the object you clicked on, instead of the menu popping up and blocking your view of it). This enables you to correct mistakes, as well as simply browse and see all the available options by pointing at them (possibly revealing more detailed descriptions, like the Unity3D menus).
The space of all possible gestures spans everything between touching the screen / pressing the button, moving along an arbitrary path (or not, in the case of a tap), and lifting your finger / releasing the button. It gets a lot more complex with multi-touch gestures, but it's the same basic idea, just multiple gestures in parallel.
Excerpt About Gesture Space
I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
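The "angle between clicks, not the path taken" rule is simple to state in code. A minimal sketch (the slice layout, item 0 at the top going clockwise, and the dead-zone radius are illustrative choices, not taken from any particular implementation):

```python
import math

def pie_selection(down, up, n_slices=8, dead_zone=12.0):
    """Map a press point and a release point to a pie slice index.

    Only the straight-line direction between the two points matters,
    never the path wandered in between, so re-selection is free: just
    keep moving before releasing. Returns None in the inactive center.
    """
    dx = up[0] - down[0]
    dy = up[1] - down[1]
    if math.hypot(dx, dy) < dead_zone:
        return None                          # inactive center: no selection / cancel
    # Screen y grows downward, so negate dy to get a conventional angle.
    angle = math.degrees(math.atan2(-dy, dx)) % 360.0
    # Slice 0 centered on "up", numbering clockwise; every slice is a
    # full wedge, so the target only gets bigger as you move outward.
    step = 360.0 / n_slices
    return round(((90.0 - angle) % 360.0) / step) % n_slices

assert pie_selection((100, 100), (100, 20)) == 0      # straight up
assert pie_selection((100, 100), (180, 100)) == 2     # right: 2 slices clockwise
assert pie_selection((100, 100), (103, 104)) is None  # tiny motion: still centered
```

Everything else in a real implementation (mouse-ahead, previews, submenus delimited by clicks) layers on top of this one angle-to-slice mapping.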
More comments from the HN discussion of "Visualizing Fitts' Law":
>Pie menus benefit from Fitts' Law by minimizing the target distance to a small constant (the radius of the inactive region in the menu center where the cursor starts) and maximizing the target area of each item (a wedge shaped slice that extends to the edge of the screen).
>They also have the advantage that you don't need to focus your visual attention on hitting the target (which linear menus require), because you can move in any direction into a big slice without looking at the screen (while parking the cursor in a little rectangle requires visual feedback), and you can learn to use them with muscle memory, with quick "mouse ahead" gestures.
[...]
Updated links from that old message:
An Empirical Comparison of Pie vs. Linear Menus (published at ACM CHI '88):
The Design and Implementation of Pie Menus: They’re Fast, Easy, and Self-Revealing.
(Originally published in Dr. Dobb’s Journal, Dec. 1991, User Interface Issue, cover article.)
Of course not all pie menu implementations support all the useful features, and some "ersatz pie menus" are really horrible because they are not designed to respect Fitts' Law, and only copy the surface features (like being round) without any of the essential properties or advanced features:
- popping up centered on the cursor, which starts out in the inactive center
- large pie slice shaped target regions extending to the screen edge, instead of only selecting items by clicking on their small labels
- delimiting item and submenu selection with mouse clicks, not distance of motion or pausing motion
- never introducing mandatory pauses and waits, or requiring you to always engage in a visual feedback loop, looking at the screen and moving the mouse and waiting until you see something before continuing (but that should always be possible if you don't mouse ahead)
- click-move-click as well as down-swipe-up gestures
- reliable, dependable mouse-ahead, even when the computer is lagging behind and freezing
- submenu navigation, back to parent menu, cancel all menus
- re-selection and browsing
- pop-up display pre-emption (don't pop up the menu window until the user stops moving the mouse, or releases the button to click up the menu even without pausing)
- real time live feedback hooks so the application can preview the effects of the direction/distance selection that you would get if you released the button
- live feedback before popping up the menu, so the popup menu doesn't overlap the thing you're editing with the menu
- live tooltip or overlay label and description feedback during gestures, and hiding or dimming labels on unselected items
- overflow linear target based items (to support greater than 8 items), laid out below like a drop-down menu, or in any other direction: up, left, right, diagonal, or custom layouts.
- instead of pies directly containing some number of items (which makes it hard to visualize the changing directions when adding and removing items), pies contain a fixed number of slices as independent containers of items, which define the directional layout before adding any items, and support empty slices (like a dummy no-op, to fill out a shameful 7 item menu to glorious 8), and slices with multiple items (with a linear layout showing all items in the slice, or a "pull-out" combobox showing one at a time as you move further out, like font size, etc)
- user defined and dynamically editable pie menus, submenus, items, and other user interface widgets, like the Blender pie menu editor does so well. Blender Add-on Review: Pie Menu Editor: https://www.youtube.com/watch?v=cQWwbBFQPrY
Usually people who design and implement "ersatz pie menus" make mistakes or omissions because they don't know better, don't have enough time or resources, haven't used pie menus themselves, or haven't studied Fitts' Law.
But unfortunately enough, there are actually people who purposefully implement crippled straw-man pie menus, even though they know better, just to acquire illegitimate software patents and spread FUD about pie menus.
Here is an example of a purposefully crippled design of pie menus that Gordon Kurtenbach implemented and published and claimed to be "typical", presumably in order to support false claims in an illegitimate patent on marking menus that he and Bill Buxton were granted, which has inhibited their adoption by making many companies and people afraid of using either marking menus or pie menus.
This video incorrectly labels them "Typical Radial Menus", but they are definitely not -- it's like they're designed to be as obnoxious and unusable as possible, aggressively triggering unsolicited submenus after you've only moved a small distance and before you've even clicked.
Demo of Marking Menus: A demonstration of the differences between marking menus, linear menus, and pie menus is shown. Shows the "marking" property of marking menus and the property of scale independence. By Gordon Kurtenbach.
>Don Hopkins, 10 years ago:
These "typical pie menus" are not at all typical -- they're just "straw man pie menus". Typical pie menus (like the pie menus in The Sims) don't behave the way this straw man implementation demonstrates, and don't suffer from the disadvantages demonstrated here. Typical pie menus support "mouse ahead" gestures and scale independence, and it's disappointing that the authors of this video weren't aware of that, and attempt to define marking menus in terms of a straw man definition of pie menus.
Unfortunately in another 10 year old discussion I recently ran across about "Crowd funding project: Marking menus for Photoshop and other software," someone watched and cited that deceptive video, and was mistakenly convinced that "Pie menus are infinitely less powerful than Marking Menus, and absolutely not the same thing."
That's why I believe the whole point of that video with terribly designed fake "Typical Radial Menus" was to deceptively promote patented marking menus instead, and that all the FUD they spread has prevented companies like Adobe from using them in their products.
There is some difference, but it's not anything like that video misrepresents. The Wikipedia page on pie menus gets it right: "A marking menu is a variant of this technique that makes the menu less sensitive to variance in gesture size." And "a mouse click could be used to trigger an item or submenu". Nowhere does it mention triggering submenus without mouse clicks.
Parroted FUD> Pie menus are infinitely less powerful than Marking Menus, and absolutely not the same thing. Where Marking Menus shine is in submenus, which I find to be cumbersome in regular pie menus (which is why my favourite implementations of pie menus eg. in Modo is without submenus).
I wrote a detailed reply to that parroted FUD, including copies of the email between Gordon and me from December 1990, proving that I meticulously explained to Gordon how pie menus, submenus, mouse ahead, and tracking all worked together, more than a decade before he made that video. So he certainly did know better, his video was purposefully deceptive, and his patent and FUD have held back progress and hurt users.
(You can see the comment and read my full reply and all the email from 1990 explaining pie menus to Gordon at the end of the link above.)
The Fitts' Law benefits decrease with the distance you have to travel to reach the bar, though. It always takes me several gestures on a mouse or trackpad to reach the top bar on modern display resolutions. Better to just have good accelerator keys (not that any of the major desktops seem to do that in a discoverable way).
> It always takes me several gestures on mouse or trackpad to reach the top bar on modern display resolutions
First, resolution should have nothing to do with this. Second, even on the largest 4K display that I have, and assuming a mouse curve with acceleration, I only have to thrust the mouse about 1-2 cm of physical distance to reach the opposite edge of the screen.
If you are moving the mouse ultra slowly to the edge, then yes, it may take a while to reach it. But the entire point of Fitts' Law is that you just thrust the mouse towards that edge as fast as you can, since you're not going to overshoot it.
Some people do not use acceleration (though one can argue they are in a minority). I was pro-acceleration until I bought a SteelSeries pad and a mouse with aluminum feet. No piece of home tech ever migrated to my workplace so quickly.
Fitts' Law says ease of use is a function of size and travel distance. Items on the edge of the screen have "infinite" size, so they're "infinitely" usable (according to Fitts' Law).
Some might say that swiping across a screen edge would be an application of Fitt's Law.
These gestures are used extensively on Windows on tablets, and have been on webOS and BlackBerry 10. (Not that Windows' laggy implementation isn't more infuriating than useful, IMHO...)
It's about target size and distance, not screen edge limits on motion, or relative motion -vs- absolute position input devices.
The "infinite height menu bar" design uses screen edges to make the target size effectively "infinite", but it doesn't minimize the distance, and doesn't exploit any direction but "up".
FWIW, Windows does let you place the task bar on any edge of the screen though, but you still can't have four task bars, one on each screen edge!
The screen edge limits on mouse movement only apply to motion-sensitive input devices like mice and trackpads, not to position-sensitive touch screens, where your finger position is directly tied to the cursor position, so you can't move the cursor without moving your finger or vice versa.
This is related to the "nulling problem" that Bill Buxton wrote about in his taxonomy of input devices, "Lexical and Pragmatic Considerations of Input Structures":
>One consequence of the second philosophy is that the same transducer must be made to control different functions, or parameters, at different times. This context switching introduces something known as the nulling problem. The point which we are going to make is that this problem can be completely avoided if the transducer in question is motion rather than position sensitive. Let us see why.
>Imagine that you have a sliding potentiometer which controls parameter A. Both the potentiometer and the parameter are at their minimum values. You then raise A to its maximum value by pushing up the position of the potentiometer's handle. You now want to change the value of parameter B. Before you can do so using the same potentiometer, the handle of the potentiometer must be repositioned to a position corresponding to the current value of parameter B. The necessity of having to perform this normalizing function is the nulling problem.
>Contrast the difficulty of performing the above interaction using a position-sensitive device with the ease of doing so using one which senses motion. If a thumb-wheel or a treadmill-like device was used, the moment that the transducer is connected to the parameter it can be used to "push" the value up or "pull" it down. Furthermore, the same transducer can be used to simultaneously change the value of a group of parameters, all of whose instantaneous values are different.
>Relative vs Absolute Controllers: One of the most important characteristics of input devices is whether they sense absolute or relative values. This has a very strong effect on the nature of the dialogues that the system can support with any degree of fluency. As we have seen, a mouse cannot be used to digitize map coordinates, or trace a drawing because it does not sense absolute position. Another example taken from process control is discussed in the reading by Buxton (1986a). This is the case of what is known as the nulling problem which is introduced when absolute transducers are used in designs where one controller is used for different tasks at different times.
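Buxton's point can be made concrete with a toy model of the two device classes (hypothetical classes for illustration, not any real input API):

```python
class RelativeController:
    """Motion-sensing device (thumb-wheel, mouse): it emits deltas, so it
    can be re-bound to any parameter with no renormalization step."""
    def __init__(self, params):
        self.params = params
        self.bound = None

    def bind(self, name):
        self.bound = name          # no nulling: the next delta just applies

    def nudge(self, delta):
        self.params[self.bound] += delta


class AbsoluteController:
    """Position-sensing device (slider pot): the handle position IS the
    value, so re-binding first requires moving the handle to match the
    new parameter's current value: the nulling problem."""
    def __init__(self, params):
        self.params = params
        self.bound = None
        self.handle = 0.0

    def bind(self, name):
        self.bound = name
        self.handle = self.params[name]   # mandatory renormalization step

    def set(self, position):
        self.handle = position
        self.params[self.bound] = position


params = {"volume": 0.2, "brightness": 0.9}
wheel = RelativeController(params)
wheel.bind("volume")
wheel.nudge(+0.3)                  # push the value up
wheel.bind("brightness")
wheel.nudge(-0.1)                  # switch targets instantly, no nulling needed
assert abs(params["volume"] - 0.5) < 1e-9
assert abs(params["brightness"] - 0.8) < 1e-9
```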
Most people around the world use small laptop screens, where having a top menu bar is bad by Fitts' law. Having window buttons (which are used often) with effectively infinite targets in the top right corner is great, as is the window menu/start button in the bottom left corner.
The rich/poor divide does indeed exist and has always existed, but it floats separately from the poverty line; changing one does not change the other. Imagine an ancient communist utopia where all the bread is split evenly, and the 20% who aren't making bread spread their more interesting items across the population equally. Maybe you'll get some nice pottery.
By any modern measure of poverty, you have created a utopia of poverty, everyone would be below the modern poverty line.
The rich of ancient times had to scrape their wealth from the labour of _so many_ people.
Meanwhile, if we tried applying the ancient poverty line to a modern rich society, we'd have trouble defining it clearly; even homeless people might sit above it. It barely makes sense.
There's no doubt they help the poor a _little bit_ .
The question is whether there exist alternative solutions that help the poor a lot more by not wasting so much helping the rich (or especially in this case - those buying and selling fossil fuels, industries we'd prefer to be slowing down rather than encouraging with tax breaks).
E.g. if the same money "spent" on this tax break had instead been invested in raising the income tax-free threshold, benefits, and pensions (by equivalent amounts), it would presumably have gotten more money into the pockets of those most hurt by fuel inflation, which was the claimed goal of the change, while doing less to benefit the fossil fuel industry and those who don't really need tax breaks.
Or to go more extreme you could add an untaxed fuel ration (there is a shortage after all), paid for by higher taxes on extreme usage, thus reducing overuse and increasing affordability at the same time. But that's practically communism, so can't possibly be done (to be fair, it'd be very expensive to implement, so not sure it's a serious suggestion).
To be fair, this is exactly what the TFA suggests:
> Policymakers have mostly responded to the shock with broad-based price-suppressing measures, including subsidies, tax reductions, and price controls.
> Going forward, the policy emphasis should shift rapidly towards allowing price signals to operate more freely and providing income relief to the vulnerable.
It would be a lot simpler if anti-competitive behaviour only encompassed clearly immoral and unfair behaviour, but it tends to be more nuanced than that. There's no simple rule, and a court would probably have to spend a long time considering such a case.
The essential facilities doctrine might apply. The question isn't whether competitors are literally forbidden from the latest chips; it's whether they're "practically or reasonably" unable to access them, and all evidence suggests that's currently the case. Possibly it's the competitors' own fault for refusing to pay the entirely reasonable price TSMC is insisting on, and perhaps Apple's contract with TSMC actually has carefully written clauses to ensure others could practically gain access without having to spend enormous amounts of money, but which nobody has been willing to use for their own reasons. But it certainly seems possible that the contracts effectively ensure Apple has sole access to the latest hardware, which would line up neatly with the apparent scenario we're in: that Apple currently has sole access to the latest hardware.
That's not to say it's a slam dunk case either. I wouldn't care to guess either way.