
I understand that historically it was 16 2/3 Hz and power transfer to/from the 50 Hz grid was done via a 1:3 mechanical coupling with a motor and generator. That was a ratio easily achievable with gears.

Nowadays with switched power supplies, this is not a problem any more. Keeping track of 16.7 Hz seems a little easier. Imagine building a numeric display for a power plant operator to see how far off you are.


> Imagine building a numeric display for a power plant operator to see how far off you are.

You could build a display with 3 as the denominator, and a decimal numerator:

   | |_   2.1
   | | \  ---
   | \_/   3

I built a KNX house about 10 years ago and I'm still quite happy. Let me share some experience:

Having the light switch on the smartphone does not make it any smarter, just more complex.

The following automations are the most valuable for me:

- automatic blinds: go down when too much sun hits the facade, go down when it's dark outside, go up when there's too much wind. No worries about leaving for work and coming back to an overheated living room (no AC needed), while still automatically collecting the direct sun in winter/spring.

- motion sensors: lights turn on when it's dark and there's motion in the room, in every room

- night mode: low-level motion-activated light in all bedrooms, corridor and bathroom; no automatic lights in the bedroom, orientation lights on, night-light sockets on, blinds down

This has brought me to the point where I rarely touch a button or switch. Twice a day, maybe?

And then there are the toys:

- blinds can close fully when a room is empty, but go half-tilted with presence, with the angle following the sun, for maximum natural light without direct sun

- turning on the TV lowers the blinds behind it that would give a reflection

- opening the terrace door opens the blinds and turns off indoor lights, to not attract mosquitos (idk if that even helps)

- shower motion sensor turns ventilation on high

- some sockets go on/off for Christmas lights

- logging of appliances, water, ventilation, heating.

I like that the low-level stuff in KNX does not need/have a central hub. But the higher-level stuff requires extra smarts. I plan to migrate those parts to Home Assistant this year.


Do the motion sensors add value when compared to independent motion sensors or humidity sensors?

My sense is that with independent dumb motion sensors you achieve much of a smart house with less cost and without the wifi dependency.


Mosquitos are indeed not attracted by light (other insects are). I believe they are attracted by CO2 (breath), blood and sweat.


I'm glad they only thought about it, but did not implement SF6 flooding of the chamber. No need to vent it to the atmosphere just for fun.

But controlling the internal pressure? I would expect that only a small difference would be needed - far from exploding or imploding the clock! Maybe small enough to have a reservoir and control only inlet and outlet valves? This could be done entirely without modifying the mechanism.


The control force would be the buoyancy change: the product of the air density change, the pendulum bob volume, and g. If he increased the pendulum bob volume with a light but fixed-size object (a foam-filled sphere?) he wouldn't need to adjust the density as much.
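
A rough back-of-the-envelope of that buoyancy change (the bob volume and pressure swing below are my assumptions, not from the article):

    # buoyancy change = air density change * bob volume * g;
    # air density scales roughly linearly with pressure at constant temperature
    RHO_AIR = 1.2        # kg/m^3 at ~20 C and 1 atm
    G = 9.81             # m/s^2
    BOB_VOLUME = 0.5e-3  # m^3 (assumed: a half-litre bob)

    def buoyancy_change_newton(pressure_change_percent):
        delta_rho = RHO_AIR * pressure_change_percent / 100.0
        return delta_rho * BOB_VOLUME * G

    print(buoyancy_change_newton(1.0))  # ~6e-5 N for a 1% pressure change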


The biggest danger of explosion/implosion is the change to/from summer time.


What I understand from the presentation:

Many Volkswagen cars somehow report telemetry. It looks like there is data not only from the EVs based on the MEB platform? But for a name/email to be associated with the VIN of the car, the owner has to register and use the app (once). Many EV owners did, but fewer of the non-EV owners did.


The article says "at the start of the join operation the bits will be set in the bloom filter".

So maybe this is built for every query? It would be needed anyway if there is a WHERE clause on the joined table.
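
For illustration, a hypothetical toy sketch (my own code, not the engine's) of how a per-query bloom filter sits in front of a hash join: it is built from the build side's join keys at the start of the join, and probe rows that cannot possibly match are skipped cheaply.

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1 << 16, num_hashes=3):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, key):
            # derive num_hashes bit positions from one hash of the key
            digest = hashlib.sha256(str(key).encode()).digest()
            for i in range(self.num_hashes):
                yield int.from_bytes(digest[4 * i:4 * i + 4], "little") % self.size

        def add(self, key):
            for p in self._positions(key):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, key):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

    def hash_join(build_rows, probe_rows, key):
        bloom, table = BloomFilter(), {}
        for row in build_rows:                      # "start of the join operation"
            bloom.add(row[key])
            table.setdefault(row[key], []).append(row)
        for row in probe_rows:
            if not bloom.might_contain(row[key]):   # cheap negative check
                continue
            for match in table.get(row[key], []):
                yield {**match, **row}

In this toy the dict lookup is already cheap; the win in a real engine is that the filter can be pushed down to the probe-side scan before rows ever reach the join.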


I recently learned about an activity at CERN. They are trying to transport antiprotons to a different site for better analysis. "BASE STEP" https://home.cern/news/news/experiments/base-experiment-take...

Surprisingly, antimatter can be stored for months. They built a 2.5 ton storage box, ultra high vacuum and magnetic trap and a loading mechanism. In a first trial they loaded 70 protons and drove around the campus on a truck.

Impressive that this is even possible. But at 2.56 * 10^-15 Wh/kg it is still orders of magnitude away from current batteries.
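
Rough arithmetic behind a number of that order (my own assumptions: all 70 particles annihilate with ordinary protons, and the full box mass counts; the replies below debate whether that mass is 1 t or 2.5 t):

    M_P = 1.6726e-27   # proton / antiproton rest mass, kg
    C = 2.998e8        # speed of light, m/s
    N = 70             # particles in the first trial
    BOX_KG = 2500      # the "2.5 ton storage box"

    energy_j = N * 2 * M_P * C**2   # each annihilation releases ~2 * m_p * c^2
    energy_wh = energy_j / 3600
    print(energy_wh / BOX_KG)       # ~2e-15 Wh/kg, the same ballpark as above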


> They built a 2.5 ton storage box

The article says 1000 kilograms, that is 1 ton.


1 tonne


Yeah, which works out to 2200 lbs, give or take.

Sounds like someone (LLM) then treated lbs like KG to turn that into tonnes again?

Dumb. Easy to do if not paying attention.


Great idea and writeup!

One important feature is missing: From a proper search function I would expect to know how often my string is found. It could be that my password is rare, or that it is rather common. I need to know! Could the search also display the number of hits?

Jokes aside - you know the number of digits of the search string and if it is still a valid uuid. So computing the number of "matches found" should be possible...


I could imagine some candidates starting with their default tools like this, and then starting to complain about the cluster performance after a few weeks.

You need a certain way of thinking to have a gut feeling "this could be expensive" and then go back, question your assumptions and confirm your requirements. Not everyone does that - better to rule them out.


Most people miss that you do not need a full and fast charging infrastructure at home. I rarely need to fill the car from empty. I rarely need to jump on the next multi-hour trip right away.

If you "stay with family" I assume this is a few hours or overnight. Even slow charging from a regular outlet gives enough over night or a 6 hour stay. In Europe a regular outlet can give 3-4kW, so 6 hours is enough to go another 100km.

At work they installed a lot of 11 kW chargers. Sure, some might need them, but most people would be fine topping up their cars every day on a single-phase charger. You park there for 6-8 hours; even at 3-4 kW that would be enough for a daily 200 km commute (which is rare, and that guy can go to the 11 kW charger).
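
A quick sanity check of those numbers (the consumption figure is my assumption, roughly typical for a mid-size EV):

    CONSUMPTION_KWH_PER_100KM = 16  # assumed average consumption

    def km_added(power_kw, hours):
        return power_kw * hours / CONSUMPTION_KWH_PER_100KM * 100

    print(km_added(3.5, 6))   # regular EU outlet, 6 h stay  -> ~130 km
    print(km_added(4.0, 8))   # single-phase at work, 8 h    -> 200 km
    print(km_added(11.0, 8))  # 11 kW charger, 8 h           -> 550 km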

I stayed in rural Italy with really old, crappy electricity. Even there I could hook up the car on single phase at 1 kW and keep charging. Two days later it was full again.


Most people miss that most people in the world live in apartments and not in private houses. I wouldn't be able to charge an EV at home even via a 1 A USB cable, simply because there is no wiring whatsoever at the parking spots.


This is crazy to me.

So you have an increased concentration of people and parking, and so there is NO WAY to more efficiently make charging infra for that?

The only way we can do home charging is in geographically semi-sparse suburbia?

Come on. Apartment buildings should be STRONGLY incented by local governments to provide charging infrastructure, even if it is simply regular power outlets not even L2, to apartments.

Urban cities have air quality problems. PHEVs/EVs solve a huge part of air quality. SUBSIDIZE the charging infrastructure. I'm sure the power company will LOVE to take some grants.


I fully agree. But the actual EU reality (Poland) is that right now, this year, if you go to a new, modern (and relatively expensive) apartment development and ask about EV parking spots, they will either tell you that nothing is wired, or that there are 1 or 2 spots in the whole garage where you can later pay to install a charger, which may or may not already be sold. And these spots are more expensive than the other, regular ones.


Giving money to build shit always works out. I think $6 trillion might be roughly enough for a couple of buildings :)

https://www.politico.com/news/2023/12/05/congress-ev-charger...


I used an L1 charger at home when I first bought my car; I'm familiar with the process and speed. But I didn't have the foresight to carry the L1 cord with me in my carry-on luggage, nor did I want to buy one to leave behind, so I was missing that very critical component. Rentals do not include any cables, so I had no way to go from a 5-15 outlet to a J1772 or NACS vehicle.


Wow - that rental situation sounds painful. Almost like malicious compliance to discourage EV usage.

Around 2019 I rented an EV from a retired enthusiast, and this included all the bells and whistles, charging cards and cables. Pricey back then, but a great experience. This is how you convince people.


> I assume this is a few hours or overnight. Even slow charging from a regular outlet gives enough over night or a 6 hour stay.

Not even close. We don't have a fast charger at home, so just charging from regular outlet. We charge from midnight to 3pm, or 15 hours a day (these are the cheaper hours with PG&E, although still a ripoff).

That's not enough to charge fully in a day. Fortunately my partner only goes to work every other day, so it's ok. If we needed the car every day, it wouldn't work.


I really like the aesthetics, even if physically wrong at the edges. Thanks for sharing the details.

As an embedded developer, I feel this is kind of wasteful. Every client computes an "expensive" blur filter, over and over again? Just for blending to a blurred version of the background image?

I know - this is using the GPU, this is optimized. In the end, this should not be much. (is it really?)

<rant> I feel the general trend with current web development is too much bloat. Simple sites take 5 seconds to load? Heavy lifting on the client? </rant>... but that's not the author's fault


I guess everybody has their own preconceptions of what's wasteful.

I grew up in the era of 14.4k modems, so I'm used to thinking that network bandwidth is many, many orders of magnitude more scarce and valuable than CPU time.

To me, it's wasteful to download an entire image over the Internet if you can easily compute it on the client.

Think about all the systems you're activating along the way to download that image: routers, servers, even a disk somewhere far away (if it's not cached on the server)... All that just to avoid one pass of processing on data you already had in RAM on the client.


I have the same perspective regarding bandwidth, but I also consider any client to be running on a computer at least ten years old and at least three OS revisions behind.

I like to consider myself a guest on a client CPU, GPU, and RAM. I should not eat all their food, leave an unflushed turd in their toilet, and hog the remote control. Be a thoughtful guest that encourages feelings of inviting me back in the future.

Load fast, even when cell coverage is marginal. Low memory so a system doesn't grind to a halt from swapping. Animate judiciously because it's polite. Good algorithms, because everyone notices when their cursor becomes jerky.


"Mips – processing cycles, computer power – had always been cheaper than bandwidth. The computers got cheaper by the week and the phone bills stayed high by the month." - The Star Fraction, 1995


each visitor brings their own cpu to do this work whereas the server bandwidth is finite


I'm confused though.

If the goal is to optimize for server bandwidth, wouldn't you still want to send the already-blurred photo? Surely that will be a smaller image than the full-res photo it is blurred from (while also reducing client-side CPU/OS requirements).


We don't know the aspect ratio of the client window beforehand, and on the web there are a lot of possibilities! So if any pre-blurred image is meant to peek out around the edges, those edge widths are dynamic. Otherwise, a low-res blurred image plus high-res non-blurred edges might be less bandwidth if the overhead is low enough.


Okay but how do you compute an image? How would your browser -- or any other client software -- know what's the hero image of a blog that you never visited before, for example?

I feel like I am missing something important in your comment.


The article describes a computational method of rendering a frosted glass effect. You can achieve the same thing by rendering the effect once (then uploading it to a server) and having the client download the rendered image. Or you can compute the frosted glass effect on the client. What's better? That's the argument.


It's like people forgot what graceful degradation and progressive enhancement is.


Ah, sorry, I didn't make it that far in the article.

IMO it really depends on the numbers. I'd be OK if my client downloads 50KB extra data for the already-rendered image but I'll also agree that from 100KB and above it is kind of wasteful and should be computed.

With the modern computing devices we all have -- including 3rd world countries, where a cheap Android phone can still do a lot -- I'd say we should default to computation.


Most of those websites that are technically "wasteful" in some ways are way more "wasteful" when you realize what we use them for. Mostly it's pure entertainment.

So either entertainment is wasteful, or if it's not, spending more compute to make the entertainment better is OK.


I would say most websites are wasteful wrt the customer, which is usually advertisers. There are websites where the user is the customer, but they’re rare these days.


I would argue that while it _feels_ wasteful to us humans, as we perceive it as a "big recomputation of the rendered graphics", technically it's not.

the redrawing of anything that changes in your ui requires gpu computation anyway, and some simple blur is quite efficient to add. Likely less expensive than any kind of animations of dom objects that aren't optimized as gpu layers.

additionally, seeing how nowadays the most simple sites tend to load 1+ MB of JS and trackers galore, all eating at your cpu resources, I'd put that bit of blur for aesthetics very far down on the "wasteful" list


I generally agree. The caveat is for some values of "some simple blur": the one described in the article is not one in my book.

For reference, for every pixel in the input we need to average roughly pi * r^2 pixels, where r is the blur radius.

This blows up quite quickly. Not enough that my $5K MacBook really breaks a sweat with this example. But GPUs are one of the most insidious things for a dev to accidentally forget to account for: they may not be so great on other people's devices.
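
To put numbers on how quickly it blows up, here is a rough count of the samples touched per output pixel (sample counts only, not a benchmark; the separable two-pass Gaussian comparison is my aside, not from the comment above):

    import math

    def naive_disc_samples(radius):
        # averaging everything within the radius: ~pi * r^2 samples per pixel
        return math.pi * radius ** 2

    def separable_samples(radius):
        # two 1D passes, each of width 2r + 1
        return 2 * (2 * radius + 1)

    for r in (4, 16, 64):
        print(r, round(naive_disc_samples(r)), separable_samples(r))
    # r=64: ~12868 samples naive vs 258 separable, per output pixel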


Isn't sending both the blurred and non-blurred picture over the network the way we've done it for two decades in web dev? With (many!) high-resolution pictures this is definitely less performant than a local computation, given that real networks have finite bandwidths, in particular for mobile clients in spots with bad wireless coverage. It is astonishing what can be done with CSS/WebGL alone these days. We needed a lot of hacks and workarounds in the past for that.


A blurred image shouldn't add very much over the high-resolution image, considering its information content is much smaller.


I don't have much data myself but when I was doing scraping some time ago I had thousands of examples where f.ex. the full-res image was something like 1.7MB and the blurred image was in the range of 70KB - 200KB, so more or less 7% - 11% of the original. And I might be lying here (it's been a while) but I believe at least 80% of the blurred images were 80KB or less.

Technically yes you could make some savings but since images were transferred over an HTTP-1.1 Keep-Alive connection, I don't feel it was such a waste.

Would love to get more data if you have it, it's just that from the limited work I did in the area it did not feel worth it to download only the high-res image and do the blur yourself... especially in scenarios where you just need the blurred image + dimensions first, in order to prevent the constant annoying visual reflow as images are downloaded -- something _many_ websites suffer from even today.


IMO it is time to seriously realise that most of this "ooh looks cool, surely I/we need that" tech has no place in this world. Whether or not the act itself is wasteful (although it generally is in tech...), the thought process itself indicates a bigger problem with society. Why do we need this thing? Why do we consider being without the thing to be bad? Like seriously, at the scale of issues in society today, who cares if your UI panel is blurred or not?


As per the central limit theorem, one can approximate a Gaussian by repeated convolution with almost any kernel, a box blur being the most obvious candidate here. And a box blur can be computed quickly with a summed area table.
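
A minimal 1D sketch of that idea (my own toy code; the 2D version uses a summed area table the same way): a few box-blur passes over a prefix sum approximate a Gaussian at constant cost per sample, independent of the radius.

    def box_blur_1d(values, radius):
        # prefix[i] = sum of values[:i], so any window sum is one subtraction
        prefix = [0.0]
        for v in values:
            prefix.append(prefix[-1] + v)
        out = []
        for i in range(len(values)):
            lo = max(0, i - radius)
            hi = min(len(values), i + radius + 1)
            out.append((prefix[hi] - prefix[lo]) / (hi - lo))
        return out

    def approx_gaussian_blur_1d(values, radius, passes=3):
        for _ in range(passes):
            values = box_blur_1d(values, radius)
        return values

    # an impulse spreads into a roughly bell-shaped curve after three passes
    print(approx_gaussian_blur_1d([0, 0, 0, 10, 0, 0, 0], radius=1))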


> a repeated convolution

I really wonder what the frame of reference for "quickly" is there. To me convolution is one of the last-resort techniques in signal processing given how expensive it is (O(size of input data * size of convolution kernel)). It's of course still much faster than a true Gaussian blur, which is still non-trivial to manage at a barely decent 120 fps even on huge Nvidia GPUs, but still.


How are we supposed to think about SIMD in Big-O? Because this is still linear time if the kernel width is less than the max SIMD width (which is 16 I think on x64?)


I guess eventually it's a trade-off between doing heavy lifting yourself and paying a little more compute and bandwidth, or offloading it to clients and wasting more energy but at lower cost to the developer. I think there are environmental arguments in both directions (more energy spent computing stuff on the client vs more energy sending pre-computed assets over the networks). I'm not sure which is better ultimately - I suppose it varies case-by-case.


First, I really like the effect the author has achieved. It's very pretty.

Now for a bit of whimsy. It's been said that a picture is worth a thousand words. However, a thousand words uses far less bandwidth. What if we go full-tilt down the energy saving path, replace some images with prose to describe them? What would articles and blog posts look like then?

I know it's not practical, and sending actual images saves a lot of time and effort over trying to describe them, but I like the idea of imagining what that kind of web might look like.


With a standardized diffusion model on the receiving end, and a starting point image (maybe 16x16 pixels) with a fixed seed, we could send images with tiny amounts of data, with the client deciding the resolution (deciding how much compute to dedicate) as well as whatever local flavor they wanted (display all images in the style of Monet…) bandwidth could be minimized and the user experience deeply customized.

We'd just be sending prompts lol. Styling, CSS, etc. all could receive similar treatment, using a standardized code-generating model and the prompt/seed that generates the desired code.

Just need to figure out how to feed code into a model and have it spit out the prompt and seed that would generate that code in its forward generation counterpart.


To consistently generate the same image, we’d all have to agree on a standard model, which I can’t see happening any time soon. They feel more like fonts than code libraries.


I mean, yeah, but here we’re talking about a knowledge based compression standard, so I would assume that a specific model would be chosen.

The interesting thing here is that the model wouldn't have to be the one that produces the end result, just -an- end result deterministically produced from the specified seed.

That end result could then act as the input to the user custom model which would add the user specific adjustments, but presumably the input image would be a strong enough influence to guide the end product to be equivalent in meaning if not in style.

Effectively, this could be lossless compression, but only for data that could be produced by a model given a specific prompt and seed, or lossy compression for other data.

It’s a pretty weird idea, but it might make sense if thermodynamic computing or similar tech fulfills its potential to run huge models cheaply and quickly on several orders of magnitude less power (and physical size) than is currently required.

But that will require nand-scale, room temperature thermodynamic wells or die scale micro-cryogenic coolers. Both are a bit of a stretch but only engineering problems rather than out-of-bounds with known physics.

The real question is whether or not thermodynamic wells will be able to scale, and especially whether we can get them working at room temperature.


I’m pretty sure the radio on a mobile device consumes more energy than the GPU doing a 2D operation on a single image.

If you want to save energy, send less data.


I recently had a shower thought that the bigger you go, the more energy you need to do computation. As in, you could make a computer out of moving planets. On the other hand you could go small and make a computer out of a tiny particle. Both scales achieve the same result but at very different costs.


There is a sci-fi series that I am absolutely blanking on that features that concept - I remember a few characters each having access to a somewhat godlike ability to manipulate physics, and using it to restructure the universe to create computers to augment their own capabilities - definitely some planetary stuff and some quantum / atomic level stuff.. hmmmm maybe gpt can help


This sounds vaguely like the series that Peter F. Hamilton has written, possibly the Commonwealth Saga?

https://en.wikipedia.org/wiki/Peter_F._Hamilton

https://en.wikipedia.org/wiki/Commonwealth_Saga


Man, 'Judas Unchained' sounds like a real familiar title but the synopsis does not ring any bells.

I think the artifact that could be tuned to pull power from celestial bodies was sold to the human / or traded or gambled or something - by an alien entity named ‘wheeler’ maybe?

The protagonist had a friend named Zero or Zeno that was kind of a techno mystic?

Idk my memories are pretty spotty


The book is Signal to Noise and its sequel A Signal Shattered by Eric S. Nylund.

A personal favorite of mine! Wheeler is such a convincing villain, and the existence of such a character as an explanation of the Fermi paradox is eminently believable.

https://en.wikipedia.org/wiki/Eric_Nylund


would it happen to be "Zones of Thought" by Vernor Vinge?


Ooh no it is not, but I am coincidentally working my way through the third book in that series!


Tbh I think people radically underestimate how fast, and efficiently so, GPUs are. The Apple Watch has physically based rendering in the UI. It would be curious to compare the actual cost of that versus using a microcontroller to update a framebuffer pushed to a display via SPI.

I did some webgl nonsense like https://luduxia.com/showdown/ and https://luduxia.com/whichwayround/ . This is an experimental custom renderer with DoF, subsurface scattering and lots of other oddities. You are not killed by calculation but by memory access, but how to reduce that in blur operations is well understood.

What there is not is semi-transparent objects occluding each other, because this becomes a sorting nightmare and you would end up having to resolve a whole lot of dependencies dynamically. (Unless you do things with restricted blending modes.) Implementing that in the context of widgets that move on a 2D plane with z-index sorting is enormously easier than in a 3D scene though.


This is wasteful and can actually cause perf issues if used really heavily.

I worked on a large application where the design team decided everything should be frosted. All the text over images, buttons, icons, everything had heavy background blur, and mobile devices would just die when scrolling with any speed.


Did my site take > 5 seconds to load?

I put a lot of effort into minimizing content. The images are orders of magnitude larger than the page content but should be async. Other assets barely break 20 kB in total aside from the font (100 kB) which should also load async.


It depends. It's seen as an expensive operation when you have to fit it into the rendering budget of a game; for a website it's probably nothing.


