The Impending Doom of Expiring Root CAs and Legacy Clients (scotthelme.co.uk)
234 points by fabian2k on June 8, 2020 | 200 comments



I begrudgingly bought a 'Smart TV' just because I wanted a 4K monitor. But I use it as a dumb monitor and watch app-based content through an Apple TV. At least I'm reasonably sure the Apple TV will be updated for 5-10 years, and is a lot more disposable than a giant TV.

My last TV worked great for about 12 years. No way Panasonic would've kept supporting it that long.

Honestly, the thing I hate most about smart TVs, especially budget ones like the one I bought, is that they have terribly slow processors, so just turning the TV on takes like 12 seconds. My old TV had a picture up within 3 seconds of turning it on.

Does any manufacturer make a good TV, or are they all smart now?


Someone on HN recently floated the idea of a company that would make quality, non-smart electronics and appliances that would also be easily serviceable. I wish this existed and would pay significantly more for products like this.

To address your question, I think they’re all “smart” now. I ended up getting a low-end Samsung 4K a few months ago and it’s been good so far. It starts up very quickly, maybe 1-2 seconds. We use it exclusively with an AppleTV because of privacy / advertising concerns that have been raised. I’m just waiting for the day when all TVs require a working internet connection and we have to start hacking their hardware and software to protect our data.


Specifically for TVs, you can get a new display driver board for whatever panel is in there. Eats HDMI/DP, spits out eDP or VBO or whatever the panel format is. Often these are pretty basic, with a minimal OSD pasted in by the Shenzhen seller who configures the thing to your order.

Personally I'm trying to get my hands on the SDK for the software that runs in the driver chip (does the OSD and the scaling and everything else), because I think there's a lot of potential there. But boy it's like pulling teeth.

Similarly, the Exploiteers (née GTVHacker) have done quite a bit of reverse-engineering of the stock boards, so you can just fix the broken-ass firmware they ship with.

Ultimately I'm picturing something like the WRT54G: a single model that the community coalesces around, to the point that a decade later third-party firmware has become such a force that manufacturers build hardware specifically to run it.


> Specifically for TVs, you can get a new display driver board for whatever panel is in there.

Is this something that is practical and commonly done, or only theoretically possible?

I have an older model Sony 4K TV (pre-Android) that is so slow now after its recent firmware updates that it responds to button presses only after 2-3 seconds. It's unbearable!

I'd love to be able to just pop in a new driver board and refresh the TV, but I've never heard of anyone doing this kind of thing before.


Get the part number of the panel (some reviews will mention this, if you don't want to open the case), and search for it on Panelook and Aliexpress.

Right now I'm working on a project with some M315DJJ-K30 and MV238QUM-N20 panels, but parts are still in shipping so I don't have anything to show for it yet.


> you can get a new display driver board for whatever panel is in there. Eats HDMI/DP, spits out eDP or VBO or whatever the panel format is

Do these boards support HDCP?


HDCP strippers are a thing now, so this shouldn’t be a concern.


For 4K? And are they legal?


Nice. Do any of those support 100+Hz frame rate or Freesync?


Any pointers to forums/projects or sellers of these? Sounds like a bit of a rabbit hole, tbh :-)


These guys are doing it somewhat commercially: http://zisworks.com/

I've stumbled into a pile of panels and I have several orders of different driver boards on their way to me from Shenzhen, but nothing in-hand to say if they work yet.


I found this for the Realtek rtd2556 chip, for which there are plenty of boards on eBay when you search for edp+hdmi+converter: https://github.com/ghent360/RTD-2660-Programmer


Never add your WiFi credentials to your TV. It doesn’t let you remove them, so your only option is to create a “dummy” network (ex: phone as a hotspot) and switch to it.

Also, watch out for clueless^helpful friends and relatives who try to “fix” your TV’s WiFi.


This is why you segment out your WiFi. I have 4 SSIDs:

1. Normal Traffic

2. Guest

3. Security Devices (cameras and other home automation)

4. Media Devices (Roku, FireTV, Smart TVs, etc)

Media devices only have access to my home Plex server and the internet, nothing else.
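For anyone replicating this on OpenWrt, a minimal sketch of the firewall side, assuming a 'media' zone and a Plex server at a made-up LAN address (zones reject forwarding by default, so only these two paths are open):

  # /etc/config/firewall -- let media devices reach Plex...
  config rule
          option name 'Media-to-Plex'
          option src 'media'
          option dest 'lan'
          option dest_ip '192.168.1.10'   # hypothetical Plex server
          option dest_port '32400'        # Plex's default port
          option proto 'tcp'
          option target 'ACCEPT'

  # ...and the internet, but nothing else
  config forwarding
          option src 'media'
          option dest 'wan'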


Yeah, VLANs work for a TV that is only trying to connect to your network, but they don't deal with the case of a TV that will connect to any open SSID (a case that at least one other commenter in this thread claims is already occurring). It wouldn't be difficult to have the TV check different open networks if it's not able to reach a server due to a firewall config on the home network. For anyone living in a relatively dense city, this could be a problem.


I just have a GLiNet travel router I picked up for $20 that is open and not connected to anything. Any "smart" devices get connected to that so they can talk to each other but not the internet.


IIUC, you're saying there's no obvious way to make the TV stop using the Wifi network once you tell it the SSID + password.

Couldn't you just change the wifi password?


It will continue trying to connect to your SSID. Failing of course, but it will still pollute your channel with garbage packets.


I really wish I could just remove the wifi chip. I'm tempted to take the rear panel off just to see if it's reasonably accessible.


Or just snip the antenna


<fantasy> Or just keep hitting company officers with a tire iron until they fix it. </fantasy>


Put it in a faraday cage.


(ot: should that be clueless^Whelpful, or is there some new convention I'm not aware of? ^ still means "ctrl" right?)


It's an old Unix convention: ^W (Ctrl-w) would remove a word using the Emacs keybindings. So in place of striking out a word, you'd put ^W and the replacement, as you would when typing it out ("clueless...err, no, wait...ctrl-W...HELPFUL").


GP missed the "w".



If true, that would be logical and fair.


It’s not. I’m not that clever.


Not quite, as I doubt exclusive-or is meant here.


^ probably means super-script format in this context.

I'd probably say ~helpful~clueless instead though


I had this problem with a Vizio 40" 4K TV once, and was able to solve it using the factory reset function in the menu.


> It doesn’t let you remove them

What about factory reset?


My wife connected it to our wifi the other day ... I almost lost it.


A factory reset should fix that TV right up.


Also be aware that some smart TVs have been found to just connect to any open wifi, so even if you don't enter credentials and think your TV isn't spying, you could be wrong.


> I’m just waiting for the day when all TVs require a working internet connection and we have to start hacking their hardware and software to protect our data.

These days won't come. What will happen instead is TVs will start coming with embedded SIM cards for cellular data connection (prepaid or otherwise negotiated between TV vendors and telcos), and then we won't have any choice in the matter as consumers, short of opening the box up and snipping the antenna (and losing warranty in the process).


The cheaper route they'll go through first is connecting to any open wifi network (many do this now), refusing to work without a connection (some do this), and pulling a connection over HDMI (will be more common with future devices; presumably next gen consoles).


> These days won't come.

These days are already here. My TCL bootloops if it doesn't have an internet connection to download ads from, and if you call support, they'll happily walk you through getting it connected to the network.


What happens if you tell them that you don’t have an internet connection?


I meant (almost) all TVs, as GP, not a few of them.


In the US that won't affect your warranty for SIM-unrelated issues. And most other rich countries have stricter consumer protection laws.


> I’m just waiting for the day when all TVs require a working internet connection

We're not there yet, but we're on our way. My Philips Ambilight TV pops up a warning every few weeks to remind me that I should connect the TV to the network to benefit from all the smart apps it has bundled. Immensely frustrating.


With ethernet over HDMI being a thing, I've heard reports of TVs tunneling out through a media box without asking first.


That would require two devices supporting Ethernet-over-HDMI, and one of them needs to have routing/bridging enabled.

I have, up to now, not seen one device where the combination of HW/SW was capable of offering Ethernet to the device OS.

AFAIK, no (mainstream) PC/embedded/mobile graphics chip/firmware/driver combination supports this. Please do indicate a counter-example, if you know of one.

I consider that unlikely to change, too, since Ethernet is limited to 100Mbit over HDMI. Gigabit (WiFi) ate that lunch.

Outside of "device collections" from e.g. B&O, which might use that comm-channel to have a "guaranteed communication backend" available, I see no real (commercially) interesting reason to implement it.

Having said that, I'd be very happy if GFX-card/TV vendors started implementing the optional parts of the HDMI spec, like CEC, ARC, and Ethernet. I'd find use for it.


Ethernet-over-HDMI is so poorly supported in consumer devices that in HDMI 2.1 they dropped it completely and reused the pins for eARC audio.


My TCL bootloops if it doesn't have an internet connection. It also shows ads on its screensaver. Coincidence? Dunno, but it's sure making TCL money and making me miserable.


Can you factory reset it? I have a TCL Roku TV which I've never connected to the Internet and it shows a pretty standard 'logo bouncing around the edges of the screen' screensaver.


I sure can. I can even select a screensaver other than the "city with ads" one. Then it starts to bootloop until I reset it to defaults and connect it to the internet. Yes, I need to do both, or it starts bootlooping again. Support is all too happy to walk me through the process.


I believe a lot of television manufacturers already offer the equivalent of that as "commercial TVs" or "digital signage"; they are marketed toward businesses. I think they're typically more expensive because they don't collect and sell data to subsidize the cost, but that might be what you're looking for. Here's an example of a Samsung digital signage TV: https://www.samsung.com/us/business/products/displays/4k-uhd...


They are more expensive because they generally have higher quality components and are not made to be disposable, and commercial applications have a higher price tolerance than consumers.

Despite the common trope, "selling data" doesn't lower the price of consumer devices. You only see price-less-than-cost plays when companies are trying to buy market share (think Alexa, Google Home, early Kindle, etc).


Umm, no - this is not just a 'common trope'. The CTO of Vizio has stated in multiple interviews that removing the data collection capabilities of their TVs would increase the prices. Direct quote:

"The greater strategy is I really don't need to make money off of the TV. I need to cover my cost."

and

"It's not just about data collection. It's about post-purchase monetization of the TV."

[1] https://www.theverge.com/2019/1/7/18172397/airplay-2-homekit...

[2] https://www.businessinsider.com/smart-tv-data-collection-adv...


Right, but the value of the data pulled from a single TV is probably around $10/yr (how could it be higher?). Over 10 years that’s $100 - sure - but do people keep Vizio TVs around that long?


From what I've found it looks like you're right. The commercial TVs come with different port sets, and are built to be more durable, brighter, and allow cooling in more orientations. https://image-us.samsung.com/SamsungUS/b2b/resource/2016/07/...


> Someone on HN recently floated the idea of a company that would make quality, non-smart electronics and appliances that would also be easily serviceable. I wish this existed and would pay significantly more for products like this.

You know, I've wanted something like this as well for a while. I know next to nothing about startups or companies, but I wonder how hard it would be to create a hardware startup around this. On one hand, you aren't really designing anything new; you're more or less creating a version of an existing product, with existing technologies, except removing undesirable parts. On the other hand, I have no idea how someone could get started without previous experience in TV design / manufacturing.


I've thought about this, too. I don't know if you could sustain a company on this idea. The profit margins on consumer tech are minuscule, and if you do it well, you will have very few repeat customers. But I think it could do well as a one-off Kickstarter thing, if you have the knowledge & reputation to back up your claims, or in small, infrequent batches. Buy a bunch of flat panel displays; slap 10 HDMI ports, an optical audio out, and a TV tuner on the back; write up some extremely fast and barebones software to switch inputs. Market it to gamers and tech nerds like us. Hell, open-source the software and make it easily upgradable. I dunno. I think it could work.


You might do better with a modular system. A single port on the display, a separate box that handles rapid switching and HDCP. HDMI switch boxes exist, but they seem to forward HDCP responsibility to the display, so switching is still slow. A broadcast quality switch/crossfade box would rock.


Doubt it would work. Hardware is commoditized and low margin. That's why TVs are embedding internet advertising, to make fat stacks from their otherwise barely profitable products.


Not all hardware is commoditized.

I do not understand why so many other markets have a number of luxury/craftsman brands and in computers we only have Apple. Cars have tons. Even woodworking has 2 built-to-last brands and 2 others that like to flirt with that moniker.

You will have to pay though, if you want the company to stick around. If you buy something that lasts 3x as long then they get less revenue from you, and the smaller volume means every sale includes paying off the initial R & D.

I recall when Lian Li first came onto the scene and the notch-up in ergonomics of computer cases that followed. Maybe we need something like that for consumer electronics.

The problem is that people like small and clean, which is where Apple excels (to the exclusion of accessibility), so I wonder what bomb-proof consumer electronics built for modification would look like.


> I wonder what bomb-proof consumer electronics built for modification would look like.

Starfleet technology. Sleek and sturdy, but opens up when pressed the right way, and generally can be completely disassembled with ease, have pieces of it replaced, and then put back together.

It's fictional, but it shows the way.


Wouldn’t that work in _favor_ of the startup idea?

If the hardware is a commodity and the market is for a bare-bones, self-serviceable “just a TV”, then the startup should have significantly lower engineering costs and benefit from the low component costs. That people on this thread say they would pay a premium for such a product makes it seem even more viable.

Maybe other people have different “serviceability” definitions, but I once replaced the main board of a broken Vizio smart TV. There were only two components: a power board and the main electronics board. The hardest part was tracking down a replacement board, but $35 and a few screws later I had a working TV again. That met my definition of self-service.


* They're not going to buy it, for the most part. They just like saying "I would buy it". Ultimately, price will determine almost all. You can see this in how the supposedly committed market for open source phones just didn't materialize when the device arrived. Turns out when people say "I want an OSS phone" they mean "I want a phone that is exactly like this top of the line phone, but the software should all be OSS and it should not be a penny more expensive". In this case, you can judge by the fact that these already exist, and when people say "significantly more" they mean "5% more", because they don't actually buy the thing.

* You can't compete on price because the TV guys are selling access to the data they collect for ad targeting / attribution.

But don't let anyone stop you from doing it. None of us thinks this is worthwhile, so you might be on to a Thiel Truth.


Maybe. There can be a problem with the size of that premium.

Suppose end consumers think fixing this is worth $50 but not worth $100. If the outfit selling a "SmartTV" gets $80 lifetime value from the "SmartTV" features like forced advertising, and margins are slim, you'll probably find you can't offer this "dumb TV" for $50 extra without losing money; then you have no customers and go out of business.

On the other hand, it could go pretty well. We know advertisers see people who buy premium products (which such a "dumb TV" would be since it costs extra) as more valuable in a lot of categories, so maybe you charge $50 extra for the dumb TV and - as it takes off - that undercuts the advertising income of the existing products, so they have to raise prices to remain profitable and that actually improves your affordability compared to the market. Or else they make advertising even more intrusive and pervasive to keep the revenue up, and it makes your $50 premium appear a bargain by comparison for those sick of advertising.


What people on HN say they would pay for, and what people actually pay for, are very different.


There's both a time lag and the influence of manipulative product offerings. "Revealed preference" is just code for exploitation of vice.


I'd think that a fancy-looking website for a group-buy, where you manage a relationship with a manufacturer to build to your spec and you build (or get them to build) the software behind it, would get you what you want. Kind of like https://www.weargustin.com/ or a Massdrop?


In my perfect world, controller cards have a standardized physical interface, which SBC makers can target as a repair/customization option.

In practice I suspect many of these systems barely function, that each is a special snowflake, and that any bad input from the controller could result in fire or water damage to your house.

So maybe TVs and fridges are a good place to start and we should give washing machines a while to figure it out.


> I wish this existed and would pay significantly more for products like this.

But the average person wouldn't, and "fickle hacker news user" is not a wide open market.


Maybe some day we'll invent the technology to address small markets without being told we're doing something wrong.


> ... I wanted a 4K monitor.

To all the folks reading at home... I would remind everyone that a TV is not a monitor.

I got a relatively inexpensive LG 43in TV to use as a desktop monitor. I've had issues. Beyond the usual "make sure your graphics card actually supports 4K" and such, you also need to take some time to dig through the TV settings when you are using it on a PC.

In particular, you definitely want to turn on "game mode" or whatever it will be called. This turns off the internal image processing that can add hundreds of milliseconds of lag to the display output. You'll also need to find the settings to turn off the overscan. Many TVs will, by default, zoom into the center of the image, cutting off the edges a little.

I've mostly gotten over the other issues with using a TV as a monitor (color settings and such), but if you want to save yourself some hassle, just get an actual monitor.


As a side note, if you want to watch 4K video, keep in mind that as of right now, _the_ main draw of 4K is HDR (most 4K video out there is upscaled), and most cheaper 4K screens (both TVs and monitors) don't support HDR. Or if they claim they do, they only "support" HDR in that the monitor does tonemapping in hardware but can't render HDR colors.

The technology is still new enough that getting the cheapest 4K screen you can find is still a really bad deal.


I really don't understand 4K 10bit HDR video that's still only 29 FPS. Why is smooth motion not a focus at all?


For my work desktop, the main thing was to have more terminal windows open. Video was a secondary concern.


I'm calling what I want for my living room a 'monitor'. I want a TV that's a monitor, basically, not the other way around.

For my computer, I use a 32" LG 4K monitor, and it's great. No 'smart' features found there.

But I can't find a 4K monitor in the 42-50" range with multiple HDMI inputs that I could use in my living room and that isn't incredibly pricey (i.e. priced in the broadcast/commercial realm).


I've had my eye on an LG 43UD79B [0] for a couple of months; I would use it both as a monitor and as a TV.

It has 4 HDMI inputs and a USB-C with display port. It even has a remote.

It's an IPS panel though. From what I hear not everyone loves those.

Amazon has it in France for around €600.

[0] https://www.lg.com/us/monitors/lg-43UD79-B-4k-uhd-led-monito...


Curious why multiple HDMI inputs matter, given a decent A/V receiver (a few hundred bucks) can handle it for you (along with waay better options for audio)?


AVRs are a huge hassle to deal with, and overkill when you only have 2 or 3 devices to switch between and/or don’t have many audio-only devices. AVRs made more sense during the days when we’d have CD players, tape decks, VCRs and DVD player, vinyl record players, cable/satellite STBs, and a game console or two all hooked up. Now it’s just an STB (if you aren’t a cord-cutter), a Roku/AppleTV/Chromecast/FireTV, and maybe a Blu-ray player (or BD-capable games console, which also doubles as a Roku/AppleTV/etc).


It seems our experiences and priorities are quite different. I've never found any "hassle" in using an AVR for its intended, dedicated purpose: centrally managing various audio and video inputs and outputs and driving surround speakers & subwoofer. Inputs include cable box/DVR, PS4, AppleTV, and (sometimes) Nintendo Switch. Audio out leverages our home theater surround system, and video out is a single HDMI into the TV. I have no interest in using any "smart" TV features, TV speakers, or attempts at playing the role of a dedicated receiver. Unless you're a hardcore gamer measuring latency, IMHO use of an AVR is simply the right tool for the job. Nothing overkill about it. Put the "smarts" in a decent universal remote, keep the TV as dumb (literally; it's for video only) as possible, ensure everything sounds great, and never fiddle with the cables once it's set up. YMMV, but this has served me very well for over 15 years.


I have the inverse of your setup. PS4, Apple TV, and Blu Ray Player use the 3 HDMI inputs on the TV, with an optical out audio cable to an input on my Sony receiver.

My receiver is older and doesn’t have HDMI in, but this solution works well with the TV I have. When a device is turned on, the TV wakes up and switches to that input. If the PS4 is on and I turn on the Apple TV, it will auto-switch. I never need to touch the TV remote at all since the devices go to sleep automatically, which is nice. Apple TV remote handles the receiver volume, and the audio input never changes on the receiver, so I don’t need the TV or receiver remotes at all.


Yep, different strokes for different folks. :) I've never had a TV with decent audio out (nor reasonable input source switching); rather, I started with a focus on amazing sound (Hsu Research Ventriloquist 7.1 with down-firing sub), and eventually the receiver's role was expanded to encompass video. Whatever works for you!


Without an AV Receiver, how would you drive a set of speakers?


That's what HDMI ARC is for - the TV can be connected to a normal speaker amplifier (so the TV acts as the receiver, basically).

I'm wary of using HDMI AVRs today because they add input lag.


I use powered studio monitors. They drive themselves.


TVs can make pretty serviceable monitors. I was using a 4K Samsung as a monitor for quite some time and it worked fine, but the "smart TV" features were definitely annoying.


Why not just get a 43" monitor instead of trying to misuse a TV as a monitor?


Because the 43" TV is $200 while an equivalent computer display is double or triple that.


YGWYPF? The price difference can sometimes just be the ads, bundled apps, and other "smart" features designed to put a couple bucks back into the hands of the manufacturer selling that TV at low or no margin.


> I wanted a 4K monitor

You might want something like a Philips BDM4350UC then [1], bought from the 'large monitor' section of your favourite IT retailer. No smart features whatsoever (and no tuner and no remote).

Unfortunately, they only go up to 43-inch - large for a monitor, but not exactly Frank's 2000-inch TV.

[1] https://www.philips.co.uk/c-p/BDM4350UC_00/brilliance-4k-ult...


I think you might actually want to search for products that are positioned as "monitors". Some of these are pretty reasonably priced and should offer what you're looking for.

For example, I am currently using an LG LG43UD79. This is a 42.5" IPS 60Hz 4K "monitor" at a decent price (I think I paid $399 at Costco). It has HDMI and USB-C inputs, but no smart functionality at all, no TV tuner, etc. The color and picture are pretty good after calibration. I normally use the monitor for programming, web browsing, chatting, etc. but also do some very light gaming, and it is fine for that.

I recommend this monitor and am hopeful that there are some more modern equivalents of it when I inevitably upgrade in a few years as I love the large monitors for getting work done.


Sony dropped playstation support for my 4K monitor in an irreversible software update, forcing me to buy a smart TV :/

Also: the new smart TV bootloops if it can't connect to the internet for more than a couple weeks at a time (it wants to refresh its screensaver ads).

You can run, but you can't hide.


I like Amazon's model for this on the Kindle: Kindles are subsidized by ads, so you can buy what is effectively a REALLY cheap tablet from a well-known company if that's most important to you. OR, if you prefer, you can pay a one-time fee and eliminate ads from the OS.


Yeah, I would be less upset about the ad situation if they had made it clear up front and given me a choice like Amazon does with the Kindle.

That said, when companies price the ad-free premium at 1000x the expected ad revenue, it gets aggravating again.


What's the advantage of this? Surely they're better off pricing it at 2x the expected ad revenue, since then they can expect to get twice as much now as they otherwise would get eventually? Indeed, even if it were 1.01x the expected ad revenue they've won with this upsell.


A willingness to pay signals that you A. have money and B. care about quality, which makes you a ripe target for price discrimination, which is why "ad-free" premiums often exceed the ad revenue by an enormous factor.


Charging 1.01x the expected ad revenue wouldn't cover the cost of implementing the no-ad option. Even 2x might be in break-even territory when you include extra support costs and the like (one person on the phone because they bought no-ads and got ads due to a mishap somewhere wipes out the profit from a lot of no-ad sales).

There's also the funny issue where people who can afford to pay extra for the no-ads option are exactly the people that advertisers want to show ads to. If the no-ads option is $5, then you're left selling ads with a target market of people who can't afford or don't want to spend $5, and that ends up driving down your ad revenue. A large gap between the expected ad revenue and the price of the no-ad option helps mitigate this effect.


Isn't this advertising pricing paradox proof of the exploitative nature of advertising?


Pretty much. In Australia Pay TV has more ads in it per hour than free to air TV.


Or you can buy the cheap one with ads and keep it in airplane mode: longer battery life, no ads. (Download content from a PC via USB.)


True - but the fact that's even possible still makes me happier than the TV my comment was replying to.


> Sony dropped playstation support for my 4K monitor in an irreversible software update

What kind of support does it need? Isn't it just an HDMI cable?


Nope. It was a specific HDMI codec that was common to early 4k TVs and monitors, and it worked great until they decided it shouldn't anymore. Something about HDMI 1.4 vs HDMI 2.0.


Sounds like something a $20ish active adapter should be able to fix. The box shouldn't care what's on the other end of the cable as long as whatever it is speaks the right version of the protocol.


The box would have to decrypt HDCP, swizzle one codec into another, and re-encrypt. I spent about $50 on dongles and the better part of a weekend pursuing that angle to no avail. Sony didn't make the announcement ahead of time, so nobody had an FPGA module ready. Of course, a cynic would say that was the whole point.

I wasn't alone in having this problem -- the support forums were overflowing with outraged threads about this. Support would give non-answers, wait for someone to say something objectionable, and then use it as an excuse to lock and delete the thread. Rinse, repeat. They had it down to a science.


Active adapters of the sort you might need probably start around $70 and go up depending on the ports required. To get a 4k hdmi 2.0 to dp adapter would cost like $120+ for example.


I'm sure someone eventually made one to address the market need, and $120 sounds like a fair price given the complexity. It wasn't available when I needed it, and it doesn't change the fact that Sony just up and dropped support for old displays.


My guess is the TV must support DRM


That's only for Blu-ray playback; for gaming the PS4 doesn't care about HDCP.


I'm pretty sure that is not correct. I've had issues with games on the PS4 not wanting to start, and I know the PS3 required HDCP for certain games as well.


HDCP?


If you open up a smart TV, isn't the "smart" part of it basically an embedded ARM cpu SBC, with a somewhat standard connection to the LCD cable? If so, how hard would it be to rip out the smarts and put in a passive HDMI to LCD converter board?


This exact strategy is mentioned elsewhere, and it appears there are people trying to reverse engineer the base firmwares as well.


That last part is my nightmare ... which company is it?


TCL. The screens are cheap, but they get you with the ads.


You have to shop for TVs by sorting from high to low pricewise... the cheap TVs are subsidized. For a project a few years ago, I bought a few Samsung commercial signage TVs for a customer whose security requirements prohibited anything like a "smart" TV.

At the time they were dumb panels; not sure if that is still true today. You had to understand what you were buying, as some of these displays are designed to show static content.


They are all smart, because that is actually cheaper. The manufacturers earn money selling data on what you view, showing ads, and selling subscriptions, and they make more on this than it costs to make the TV "smart". There was a Vergecast with the Vizio CEO and he said this is the new business model. Notice that both Samsung and Sony are testing ads on the homescreen; if those two are willing to do this as premium brands, you can be sure everyone else will try as well.


The cost to make the TV smart is almost nothing (the software would cost a bit, but judging from every smart TV I've seen, not as much as it should, because it's always pretty crappy and janky). The additional BOM cost would be a few dollars at the quantities they make these things in, for an ARM SoC and some RAM.

I really think he's massively exaggerating. His statements might be more rationalisation because people hate it, but they just want to squeeze more money out of people. But I can't believe that showing some ads and fingerprinting content would net them more than $100 over the life of an average TV. I'm sure there are plenty of people who would pay more than that for a no-ads, no network connection required version.


What we did for our old TV was to get a Chromecast after 3-4 years. This might be a cheaper and more environmentally friendly approach than yours, as we then buy one «smart device», while you might end up with two over 12 years. But then some (all?) smart TV offerings have issues with terms and conditions, privacy, etc. which may make them undesirable for you.


I never hooked up the smart TV to my wireless network. I have heard too many stories about data getting sent back to the manufacturers to sell your watching habits, etc. Instead, I use a Roku, which I know gets updates (and yes, they also sell your watching habits, but so far nextdns.io's lists block the DNS names Roku uses to do this).


I have a dummy hotspot with no access to anything whatsoever, and which I allow my TV to connect to. Apart from exposing an undocumented API (with code execution capabilities), and allowing logging in as root over Telnet (without a password), it also tries to fetch software updates over plain HTTP. It's a bloody nightmare.

On the plus side, it was possible to disable many of the smart features once I discovered the telnet capability. :D


Vulnerabilities are a massive trade-off. I want my devices to be open to me, not to anyone. It seems like it should be straightforward to give you access to the device (like "scan this QR code on the inside of this panel" and you get the private key). One can dream...


Yeah, pretty much every single "smart" device I've ever bought has had vulnerabilities that never got patched. It's ridiculous.

On the plus side, my 10 year-old laptop still finds a use as an access point/firewall for all those things. :D


Does anyone know of such hacking that someone has done and documented? I'd be interested in trying to telnet or connect to a Samsung TV that I have, or at least be able to sandbox it somehow.


Problem is, some do connect to any unsecured WiFi they can find.


I purposely bought an expensive one that’s pure Android TV, where you can disable all the tracking bits.


You bought an Android device to avoid tracking? lol

As long as you're using binaries that were compiled by Google I wouldn't trust them one bit.

If you use LineageOS on your TV you might be a little better off.


Android by Sony versus Samsung or webOS? I’m happy with my choice.


Just remove the antenna? Won't be connecting very far after that!


That is terrifying.. good to know!


Pro-tip: you can usually un-accept privacy policies on Smart TVs and this reverts them to "dumb" TVs. This disables built-in apps and may make "software updates" unavailable but that's actually what you want anyway.


How would I do this?


It's buried in settings. On Samsung it's in the system settings.


I recently started looking for a new TV since my old “dumb” TV is starting to have image issues, and they all seem to be “smart” these days. It seems manufacturers have slashed prices by adding adware to the GUI of TVs.

Needless to say, I’ve been incredibly frustrated with my search.


I can't speak for their latest and greatest, but the LG TVs I've used running webOS (OLED and LCD) work fine without ever being connected to a network. I have a C7P panel I've connected via Ethernet a couple times to update the firmware and it has continued to work offline just fine.

They also support CEC, so if the devices you're plugging into them support CEC you can just put the TV remote into a drawer and not worry about ever accidentally triggering the smart features.


> I have a C7P panel I've connected via Ethernet a couple times to update the firmware and it has continued to work offline just fine.

You may want to be careful with this. On Amazon Kindles when you connect to WiFi it downloads ads and will keep showing those same ones forever if you never reconnect and let it fetch newer ads. Seems the TV doesn't do it yet, but it could be added in a firmware update.

Unrelated, what do you even need firmware updates for on what is effectively a dumb TV?


To fix some issues with a Roku Premiere Plus, where switching refresh rates caused screen blanking because of those stupid Netflix autoplay trailers.


In my LG OLED there's actually an option to turn off ads in the GUI. It is off by default, amazingly. I've done a GDPR data request and unless they are hiding some of what they collect (which I doubt) they don't really collect anything crazy. The netflix client does a lot of phoning home though but I have blocked everything I don't want to connect.


Luckily I bought an RCA 55" 4K TV for like 400 CAD; it's 100% dumb, HDMI and other inputs only. I got it two years ago, so it's probably not like this anymore.


I went with RokuTV. They have the best long term support track record for old hardware. 4K Roku hardware is powerful enough. You don't get wedded to a platform that will be abandoned in four years.

I have a Samsung smart TV for work tasks that has the most annoying behaviors built into it. It always insists on "identifying" an HDMI source which it can't do because I've never put it onto my network. Roku doesn't appear to do this kind of data exfiltration.


Roku may well be the least worst option, but it does exfiltrate data.

https://news.ycombinator.com/item?id=21657930

https://news.ycombinator.com/item?id=21614851


Search for public/info/commercial displays if you want dumb(ish) large monitors.

https://geizhals.eu/?cat=monplas


No experience (or affiliation) with it, but I’ve heard some good things about https://ironcast.tv/



I honestly do not know why TV manufacturers have not made the "smart" functions upgradable and modular.

Buy a TV, then in 5 years when I need more functionality I simply upgrade the "smart module" for $50 instead of having to replace the entire TV.

Of course, they would then probably try to make it an "as a service" subscription bullshit and it would be worse than what we have now, so never mind, I retract this comment.


https://www.necdisplay.com/

At most they come with a QAM tuner. Some are display-only. None have smart TV spyware, from my browsing of their offerings.

Fixed a typo


My LG turns on very quickly, and actually has decent UI.

I’m guessing they have a decently powerful processor to power WebOS in there.


If you have a current Firefox that first diagram may not be correct for you. Let's tell a brief story about why.

Servers, as Scott explains, are supposed to present a "chain" of certificates; in practice it's one leaf and then just anything else that might be useful for clients to assemble a trust path. If you do this, everything works.

But lots of servers are misconfigured and present only the leaf certificate, not least because a lot of older software makes configuring the chain needlessly confusing/ complicated and it often "seems" to work anyway in some client software.
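(As a concrete example: in nginx the ssl_certificate file has to contain the leaf plus the intermediates concatenated, i.e. certbot's fullchain.pem rather than cert.pem; the paths below are the standard Let's Encrypt layout:)

  # nginx: serve the full chain, not just the leaf
  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;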

Given just a leaf and a set of roots it trusts, a client can't make a useful trust decision; there's a gap. One option in this circumstance is to just give up and tell the client's user this isn't secure. That's a poor UX.

Another idea is called AIA Chasing, many popular web browsers do this. When it can't get from the presented certificates to a trusted root the browser inspects the Authority Information Access field of a certificate to find a URL, hopefully this URL points to an intermediate certificate that plugs the gap. A problem with AIA chasing is that it reveals information about your browsing to the CA who otherwise have no reason to know. This privacy concern led Mozilla to reject AIA Chasing.
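(You can inspect the field AIA chasing works from yourself; this sketch assumes a reasonably recent OpenSSL (1.1.1+ for -ext) and a leaf that actually carries a CA Issuers URL, and the fetch URL below is a placeholder:)

  # print the Authority Information Access extension of a leaf
  openssl x509 -in leaf.pem -noout -ext authorityInfoAccess
  # fetch the intermediate from the "CA Issuers" URI it lists;
  # these are usually DER, so convert/inspect accordingly
  curl -sO http://ca.example/intermediate.crt
  openssl x509 -inform DER -in intermediate.crt -noout -subject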

Yet another idea is to cache intermediate certificates for some period of time, then use that cache to fill any "gap" when making a trust path for other servers you connect to. This causes hard-to-predict errors for users, maybe the funny cat video site you enjoy works fine... unless you try to visit it first thing after turning on the PC. It can potentially represent a privacy risk within a session.

So ultimately Mozilla's workaround in current Firefox is to give Firefox a fairly complete set of intermediates, just as if it had cached them already. Essentially that right hand "embedded in browser" part of the diagram extends left to the intermediates too.

Mozilla made that happen through two means. Firstly, m.d.s.policy required all root CAs to tell it about all unconstrained intermediates they had ever created, which was itself a revealing process, because there sure are a lot of (mostly older) garbage intermediates that clearly are untrustworthy and shouldn't exist. More recently, Certificate Transparency means any intermediates still in use must be in the CT logs, because otherwise the leaf certificates won't be accepted, so Mozilla can "just" fetch that data.


Interesting note about Firefox there.

I've certainly experienced the "missing intermediate cert" problem before. It is a bit of a nightmare to debug.

1. Had issue, raise support ticket: TLS not working

2. Ticket closed as can't reproduce

3. Try myself again locally, also can't reproduce. Hmpf!

4. 2 months go by...

5. Experience same issue. Debug more carefully locally, use openssl to get proof of the missing intermediate (see the sketch below)

6. Raise support ticket with platform team, then try to convince them that just because most people are not affected, it still needs to be fixed!
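For anyone else stuck at step 5, something like this against the affected host (hostname is a placeholder) is usually proof enough:

  # -showcerts prints exactly what the server sent; with the
  # intermediate absent, verification fails with something like
  # "verify error:num=20:unable to get local issuer certificate"
  openssl s_client -connect broken.example:443 \
      -servername broken.example -showcerts </dev/null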


You can often engage people, especially technical people who enjoy things being gamified, by showing them that Qualys gives them a lousy score for what they've done.

https://www.ssllabs.com/ssltest/

This also lets you out-source the decisions about what's important versus what really doesn't matter to somebody else, and unless you've got (or can hire someone who has got) hours per week to read and digest work in this area, that's likely going to mean better security in practice.


ssllabs is great, but only works for publicly accessible websites.

The challenge in my experience is to resolve the issue for internal sites, the thousands of internal tools and test domains from every department. None of the public tools can reach them.


If you like ssllabs, you will love https://www.hardenize.com

Much more intel, better performance and a better UI.

No affiliation, just a fan since day one.


Certificates suck. They are monumentally user-unfriendly and complex. How many major internet properties have had outages driven by certificate renewal in the last several years? Half? I know the enterprises I work with seem to have some sort of work-impacting certificate problem every month.

The ecosystem needs to become a lot more robust and user friendly in general. Unfortunately, I’m just a user of certificates not a cryptographer, so I’m not really qualified to design security critical aspects of the system, but here are some rough features that I’m looking for:

- Certificate warning date: a time period, defined like the expiry date, that indicates clients should warn of impending expiration but still allow click-through

- Make it easier for people to acquire and renew certs. Let's Encrypt is an amazing start here, but it's not a universal solution yet

- Formalize a way to solve the key distribution problem described in the article. Again, I’m not a security expert, but perhaps embedding a “replaces” concept into a certificate would work.


While I empathise with your viewpoint, that the ecosystem is hard to work with, I'm afraid I don't think there are any quick solutions to this.

> certificate warning date

If this is user-visible then for most companies this would be nearly as bad as expiry, and if it's not user visible then it's not useful.

> Make it easier for people to acquire and renew certs

Let's Encrypt pretty much solves this problem. The places where it doesn't work are companies that lock down the use of that sort of software. I don't think there's an easy way around that.

> Formalize a way to solve the key distribution problem described in the article

This problem isn't just the problem described in the article, it's _THE_ Key Distribution Problem. It has a Wikipedia entry, it's a chapter in most security textbooks, it's a lecture in most cryptography modules at universities. It's a _hard problem_. The practice of including Root CA certs on devices is the best thing we've come up with yet that has the security and scalability trade-offs necessary.

If the "replaces" concept was viable in the general case I think we'd have likely implemented it, but I suspect it's too vulnerable, and likely only pushes back the problem.


I suspect we need standards/protocols defined at layers above the crypto layer. For the certificate warning date, a yellow lock rather than the giant red screens browsers show me now would be good, so at least in some cases users can handle the notification themselves. However, the real solution is standardized tooling that helps the organization detect the issue.

Let's Encrypt is definitely in this vein; however, I think it needs to get to the point where software defaults to user-friendly behaviors like auto-renewal unless explicitly told not to. This seems like it could be addressed in the standards-making process, but maybe that's the wrong venue.


Certificates don't have to be difficult. The major CDN and hosting providers can set up TLS automatically with no user action. CloudFlare rolled out great command-line tools to handle internal certificates and CAs (see cfssl).

I've worked at startups with fully functional PKI. It's almost trivial to achieve, as long as you've got configuration management over all the servers (Ansible, Salt, and the like).

On the other hand, I've worked at a large bank trying to make TLS work across the firm and it was a mess. There was no motivation to have any automation around certificate management and no control over server configurations. One simple issue, for example: CAs on Linux are managed by the ca-certificates package, and all it takes to keep TLS working is to upgrade that package every couple of years with "apt-get upgrade ca-certificates". Some servers hadn't had upgrades since 2015. It wasn't particularly difficult to get 10k servers from 5 departments upgraded after handing them instructions to do so, but of course a few other departments won't act until things break (and sometimes still won't act in spite of active incidents).


To quote myself¹ from two years ago:

Any Internet-connected device is, in fact, a server, and must be seen and managed as one. This means strict control of installed services and, first and foremost, regular updates of all its software components (including firmware). If you acquire and install such a server which either can’t be updated or one which you know, realistically, won’t get any updates six months after installation, that’s asking to lose.

1. https://news.ycombinator.com/item?id=18019343


Smart TVs are the worst. Unrelated to TLS: I work for a video streaming company, and while almost all of desktop/mobile Chromium-based traffic is Chrome v80+ or at least v70+, the smart TVs have webviews based on Chromium v45~55 (most TVs) or ~v60 (cutting edge modern Samsungs), and some older ones (~2016) are even worse, based on a very outdated webkit which is more or less Safari 8.0 - basically hardly better than IE11.

But coming back to TLS again: even if the old devices have an up-to-date root store, they typically have an outdated TLS stack which does not support TLS 1.2 ¯\_(ツ)_/¯ and this one is probably even harder to update. And since most companies want to drop TLS 1.1 and below on their servers soon, this should also doom those outdated devices, apart from the root store issues.


If this is the newspaper-whack-to-the-nose that gets people to stop using sly TVs as more than dumb monitors, then I think we'd all be better off for it.


Having expiring certificates/key rotation might be a net negative: if you keep the private key secure, there is no need to rotate it, and it avoids a lot of hassle. Also, if you have a revocation mechanism, then rotation doesn't add that much for keys that are only used for signing and not for encryption (like the CA keys).

Of course in some cases like domains it's necessary since the domain can be transferred, but this is solvable by DNSSEC+DANE.


It's a good point. At any given time, the key is either compromised or it's not. And you may or may not know either way.

If it's compromised, and you know, then it should be revoked -- if your only mechanism to revoke it is waiting some amount of time (several days, months, or even a decade) you have a pretty big problem.

If you don't have a way to detect whether it's compromised, rotating as a precaution almost makes sense -- but then the valid time should be very short. I'm not exactly sure how short, but definitely the decade or two used for CA certs is too long. Make it shorter than a year and automated renewal becomes necessary, but if you have that mechanism, why not check for revocations?

The only other reason I can think of to expire non-compromised certificates is to force them to be regenerated to use newer signature algorithms, but even that's difficult to predict. We're currently using SHA-2 for PKI, but is that still going to be reasonable in 5, 10 years? Someone could figure out a potential weakness at any time, which would start the move to something else.


What we're talking about here are trust roots and one of the things people usually get wrong when they try to draw a diagram of how this works is they draw certificates as nodes and then just connect those nodes together with abstract lines.

Actually the correct mental model is a graph of public keys and the lines between them (joining two nodes directionally) are the certificates.

So although we traditionally handle the root trust set as a bunch of self-signed certificates, those certificates are largely unimportant; what's vital is the public keys baked inside them.

As a result, what makes older roots obsolete is not a signature algorithm, but the type and size of key chosen when they were created; that key is in a very real sense the root.

This AddTrust root for example was 2048-bit RSA. You can use that size of RSA key today for your funny cat video site, no problem, but it's clearly an inadequate choice for a CA root. Fortunately roots like this are gradually expiring.

If I figure out how to build a machine that can break 2048-bit RSA keys for $10M per key it makes no sense to target your cat videos. But a CA root is an attractive target. So we'd like to have more margin for the roots not the same or less.

Some years ago Mozilla finally prohibited 1024-bit RSA keys in roots. Some of the oldest roots in the business were 1024-bit RSA, which today would not be considered acceptable even on your cat video site, but when those roots were created 1024-bit RSA seemed safe enough.

In 5-10 years you'd probably want as much as possible for roots to be the more compact elliptic curve public keys, maybe there will be some better (more secure) curves in use by then, maybe not.
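(If you want to check what any given root actually is, openssl will tell you; the filename here is just wherever you saved the root:)

  # show the key type and size baked into a root certificate
  openssl x509 -in AddTrustExternalCARoot.pem -noout -text \
      | grep -A1 'Public Key Algorithm'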


I'd say sign the next root certificate with the previous root key; then you can rotate root keys automatically. Browser updates work the same way: you download the new browser (with the new root certificate) authenticated by the previous root key.


Most roots are required to be cross-signed today by all the major browser and OS vendors. What you have is a classic web-of-trust issue: it's the new key that has the signatures attached, so you could take the new key/certificate at its word, but if the problem is the previous root expiring, how do you trust that the new root certificate was signed by the previous root before it expired? You can't trust the now-expired root, can you? If you restrict it to the window of time where both roots are unexpired, you have the exact same problem as the intermediate game that the BBC has to play in the article.


Same as with the browser update: you receive it before the previous certificate expires.


Most of this article is about the pain caused by the fact that we cannot assume devices will update in years. There's no window you can set for both certificates to be valid side-by-side that will be long enough to update every device.

Again, roots are already cross-signed and the article points out they still take 2+ years to pass other validity checks for root adoption. Even if they "virally" propagated in this manner after all those checks passed, there are plenty of devices that are only powered on once a year or less; there are a lot of devices whose support lifetimes for any upgrades at all are less than 2 years (that's a specific call in the article: we need security support lifetimes extended to decades at least, probably).

If they propagate "virally" you have a lot of questions about whether or not a device will even see that there is an updated certificate. Not every user or device visits a web page with a certificate chain to any given root routinely, much less ever.

Comments to this article even point out that there is a semi-viral update process already in place in most browsers called AIA chasing, where certificates may point to URLs to look up their parent authorities, and Mozilla intentionally doesn't AIA chase because it's very definitely a privacy risk, even if you don't agree that it is a security risk (bad certificates sending you to bad URLs). The security risk is why the browsers that do AIA chasing only allow it for intermediate certificates and will not trust new roots found by AIA chasing.


If you aren’t rotating certificates regularly, then when you do need to rotate one, you won’t have a procedure to do so, and clients might do things like assume they will never be rotated.


I got hit at work by the AddTrust issue too, quite a bit of a nightmare to chase down, especially Java keystores. And that was mostly because there exist clients who barf if they have a valid working chain but also another chain where one of the certs has expired!
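For anyone else fighting Java keystores over this: one commonly suggested fix was simply deleting the expired AddTrust root from the trust store so the JVM builds the path via the modern root instead. The alias and cacerts path below are assumptions (they vary by JDK); 'changeit' is the stock store password:

  # find the offending entry (path is $JAVA_HOME/jre/lib/... on JDK 8)
  keytool -list -keystore $JAVA_HOME/lib/security/cacerts \
      -storepass changeit | grep -i addtrust
  # remove it; the alias is a guess, use whatever -list showed
  keytool -delete -alias addtrustexternalca \
      -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit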

For embedded devices - and it's not just smart TVs but all IoT devices! - I think the solution is a legal requirement: a condition of a device being sold on the EU / US market must be committing the whole source tree and build chain, including any and all key material for signing firmware, to the national library or another government-run secure escrow, to be released to the public once the manufacturer discontinues support for the product. Just as you can't sell a product without adhering to CE and electrical safety standards, manufacturers should not be able to sell products without having them certified for IT safety and e-waste/"planned obsolescence" issues!


TIL about Network Error Logging[1], which has browsers send you reports when your DNS isn't resolving, your load balancer is timing out (HTTP / TCP errors in general), or your TLS certificate is invalid. Basically all the types of issues that don't end up in your own application server logs.

Report URI should lead with this (it's not even mentioned on the front page)! It's super useful for every site; I can't think why anyone wouldn't want it. CSP monitoring, on the other hand, which seems to be their focus, is a much harder sell: a pain to set up and maintain, and of somewhat questionable value in my opinion.

[1] https://report-uri.com/products/network_error_logging
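For reference, enabling it is just two response headers; the endpoint URL below is a placeholder for wherever you collect reports, and Report-To is wrapped here for readability (it's a single header line):

  NEL: {"report_to":"network-errors","max_age":2592000}
  Report-To: {"group":"network-errors","max_age":2592000,
    "endpoints":[{"url":"https://example.report-uri.com/a/d/g"}]}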



Report URI is run by Scott and was featured in the article.

By definition you have to run such a service entirely separate from your own infrastructure: you should use a different CA than your other site(s), a different domain name for sure, preferably a different infrastructure host etc. It makes a lot of sense to use something like the Report URI service for it.


This looks incredibly useful! It'll also probably tell you who is using Pi-hole though.


That is why we need open hardware specs, open-source firmware, and a right to reverse-engineer out-of-market devices. Like OpenWRT or LineageOS, they keep old devices usable for a long time and reduce electronic waste.


> If we take a look at similar data for iOS, it's a very different story.

This fits the prevailing narrative, and for all I know it's true, but Scott does not in fact illustrate it.

The "similar data" to a table of all Android API versions versus popularity turns out to be a chart of recent iOS versions in which everything that isn't current is just grouped as "Other" at 16.28%

Maybe if we ungrouped "Other" it would prove Scott's point, maybe not; we shan't find the answer in his article. If those "Other" iOS users are all running 10.3, it's a very different story than if they're on iOS 6...


He links directly to the source of that data and on that site you can download a CSV of the aggregated data.

  "iOS 13.3",27.03
  "iOS 12.4",16.19
  "iOS 12.3",14.2
  "iOS 13.1",10.22
  "iOS 12.2",7.89
  "iOS 12.1",4.16
  "iOS 13.4",4.02
  "iOS 13.2",2.68
  "iOS 10.3",2.31
  "iOS 11.4",2.14
  "iOS 9.3",1.94
  "iOS 11.2",1.03
  "iOS 12.0",0.86
  "iOS 13.0",0.82
  "iOS 11.0",0.81
  "iOS 11.3",0.72
  "iOS 6.0",0.45
  "iOS 10.2",0.4
  "iOS 11.1",0.32
  "iOS 7.1",0.18
  "iOS 13.5",0.17
  "iOS 10.1",0.17
  "iOS 9.1",0.15
  "iOS 7.0",0.15
  "iOS 10.0",0.14
  "iOS 6.1",0.13
  "iOS 5.1",0.12
  "iOS 9.2",0.12
  "iOS 8.4",0.11
  "iOS 8.1",0.1
  "iOS 8.3",0.06
  "iOS 9.0",0.06
  "iOS 5.0",0.04
  "iOS 8.0",0.03
  "iOS 4.3",0.03
  "iOS 8.2",0.02
  "iOS 3.2",0.01
  "Other",0.02


"Impending" doom?

Devices from 10 years ago are _already_ useless online.

Not only are the root certificates completely expired, but let's see which protocols an openssl 0.9.8 build from a decade ago has in common with, say, the current Gmail IMAPS server: NONE.
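
It's easy enough to probe which TLS versions a given server still accepts; a rough sketch in Python (the host is just an example, and results also depend on what your local OpenSSL build will even attempt):

  # Try a handshake at each TLS version and report what the server accepts.
  import socket, ssl

  HOST, PORT = "imap.gmail.com", 993

  for ver in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
              ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
      ctx = ssl.create_default_context()
      ctx.minimum_version = ctx.maximum_version = ver
      try:
          with socket.create_connection((HOST, PORT), timeout=5) as sock:
              with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                  print(ver.name, "accepted:", tls.version())
      except (ssl.SSLError, OSError) as err:
          print(ver.name, "rejected:", type(err).__name__)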


I am really conflicted when configuring my servers. I can see the need to deprecate insecure protocols but deliberately cutting off compatibility with old clients feels wrong.


My TP-LINK Wifi-N router bought in 2010 still works nicely on the latest OpenWRT, and I assure you it runs software more up to date than 99% of closed-firmware devices currently on the market. Open-source firmware is the way to go to stay current and avoid producing more electronic waste.


ISRG root is 2 years old. And even uses RSA.


We had to deal with the sectigo / addtrust expiration. It was annoying because we reached out to our cert vendor and they had claimed it would not affect us (only affecting “legacy” systems or old browsers), but then of course it did affect us. It was an easy resolution, but still annoying.

Of course, we still have people working on our stuff. If it were some abandoned hardware product I could see this being a disaster.


We too were affected by the Sectigo / AddTrust expiration. Google's Uptime Checks couldn't verify the validity of the certificate and reported our website was down. While this was a false alert and it was easy to verify and circumvent, it just shows that even companies like Google aren't prepared for something like this.


Has anyone taken the offered training?

I'm pretty good with TLS, but could use more hands-on time with openssl and PKI, especially a deeper understanding of certificate chaining issues. It would be interesting to know whether the training is targeted at "total beginner" or "intermediate looking to be advanced".


I’ve taken the training and I recommend it. Keep in mind, though, that TLS and PKI are broad subjects, and covering every intermediate/advanced use case in a two-day training won’t be possible. But yes, it does cover certificate chaining issues, cross-signing, etc.


This might be a stupid question, but... Why do root certificates have to expire at all? The article presents this as if it's an immutable law of physics.


We have no real means of revoking certificates whose private keys have leaked. Eventual expiration seems better than nothing. Also, ciphers tend to become obsolete, so enforcing churn helps keep things moving forward. I'm not an expert here, though; there may be other reasons.


I feel like if a certificate gets compromised, the fact that it expires in 5 years won't be of any help from the security standpoint. There are already mechanisms in place to revoke certificates before they expire.

https://en.wikipedia.org/wiki/Certificate_revocation_list

https://en.wikipedia.org/wiki/Online_Certificate_Status_Prot...
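
Both mechanisms are advertised in the certificate itself. For illustration, here's a small sketch pulling the CRL distribution points out of a (leaf or intermediate) cert with the Python 'cryptography' library; the file name is a placeholder:

  # Print the URLs where revocation lists for this certificate live.
  from cryptography import x509
  from cryptography.x509.oid import ExtensionOID

  cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
  crl_dps = cert.extensions.get_extension_for_oid(
      ExtensionOID.CRL_DISTRIBUTION_POINTS).value
  for point in crl_dps:
      for name in (point.full_name or []):
          print("CRL at:", name.value)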


> Eventual expiration seems better than nothing.

Does it? Because that is the cause of the 'impending doom' from TFA


This will also be an issue with some Docker images built years ago with an outdated cert store somewhere in there. Unmaintained images are, in effect, frozen in time.

I'll just mention GlassFish here in case someone in the near future wonders why this or that application is failing: it has its own cert store (different from the JVM one), and it is a bit out of date. Yes, I just ran into a GitHub project whose last commit was 3 months ago (March 2020) that wouldn't work with LE certificates, all because deep down in the Docker matryoshka there was a GlassFish with outdated certs.
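
If you run into this, one quick way to audit GlassFish's own trust store is keytool, something along these lines (the keystore path varies by install, and 'changeit' is merely the conventional default password):

  keytool -list -v \
    -keystore /path/to/glassfish/domains/domain1/config/cacerts.jks \
    -storepass changeit | grep -E "Owner:|until:"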


This post honestly reads like an ad for the training course... did it really need 8 mentions on the page?


We have our smart TV just acting as a display for a mini PC via HDMI connection. TV knows nothing about our WiFi so no phoning home.


Have you had fun hunting for and turning off all of the awful picture "enhancing" features yet? And then figuring out how to make the changes persistent?

Modern TVs really seem to ruin the video even though they are perfectly capable of just showing a color-adjusted signal; it quickly becomes obvious when you plug in a computer.


I don't know, to be truthful. The TV remote has loads of buttons, but the only ones I use are on/off, source (to find the HDMI input) and, sometimes, volume. I'd love to have just a dumb monitor of the same size, but they're not available. A bonus then would be no need to have a TV licence.


If you're referring to a UK TV licence, it doesn't matter what kind of monitor you're using; it's about what you watch (live TV, whether over the air or online, or any use of BBC iPlayer). Watching TV via a computer and a dumb monitor still requires a licence.

https://www.tvlicensing.co.uk/check-if-you-need-one

(Edited to add: Conversely, merely owning a TV does not require a licence; if you're only using it as a computer or gaming monitor, and not watching [online] TV programmes on it, you should be fine.)


Unfortunately not. Having a TV set in the house here requires a licence even if the thing is out of order.


Interesting! I'm curious where "here" is, if you're happy to say...?


Potentially Ireland. That seems to match the licensing scheme here.


You almost need something in HTTP for a client to signal which root certificate it used; then, if that root is old, the server could push the new one (signed by the old one) in the response. There's plenty of potential for abuse with this, though.

Maybe root certs need to expire more often so implementors have to do OTA updates.
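
That "push the new one (signed by the old one)" idea is essentially cross-signing, which CAs already do between roots. A hypothetical sketch of the shape of it with the Python 'cryptography' library (keys and names are made up):

  # Cross-sign: a certificate carrying the NEW root's public key but
  # issued and signed by the OLD root, so clients that only trust the
  # old root can still build a chain into the new hierarchy.
  from datetime import datetime, timedelta
  from cryptography import x509
  from cryptography.x509.oid import NameOID
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa

  old_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  new_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  cross_signed = (
      x509.CertificateBuilder()
      .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "New Root")]))
      .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Old Root")]))
      .public_key(new_key.public_key())
      .serial_number(x509.random_serial_number())
      .not_valid_before(datetime.utcnow())
      .not_valid_after(datetime.utcnow() + timedelta(days=5 * 365))
      .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
      .sign(old_key, hashes.SHA256())
  )
  print(cross_signed.subject, "issued by", cross_signed.issuer)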


Who hasn't had a root CA issue in production? It's very painful.


A working device should never stop working because of the expiration of some made-up entity. It is a sign of complete failure in basic system design.


For sure. Add to that the fact that in every crypto library I’ve used, you can’t tell it to ignore the expiration date without also ignoring actually harmful certificates, and this is what you end up with. If devs could just say "yeah, we are never going to update the certificate", I’d be OK with that. An expired certificate is not inherently untrustworthy; it should just mean that it can’t sign anything new. Certificates would then die of their own accord or be revoked.

Tin foil hat: to me, it smells more like CAs wanted to make some cash by forcing their customers to buy certificates every so often instead of actually solving the problem, and are now being bitten by their own rules.


Once again, the pursuit of security theater to meet the requirements of commerce (which most of the web doesn't share) causes problems accessing large parts of the normal web.

When are people going to learn that centralized cert authorities are just for commerce and only hurt the non-commercial web?


Curious - what do you suggest for non-commercial web, taking into account both privacy and security?


If those are your motivations, then making the site accessible as an onion service over Tor, in addition to the HTTP interface (and self-signed HTTPS), seems reasonable.



