Not trying to sound like a dork, but 9 GB -- that's nothing.
macOS routinely eats 30-40 GB of disk space in Volumes/Preboot and "macOS Install Data" for god knows what. Plus even a bare-bones Xcode installation will cost around 20 GB, much more if you actually want to run your code on connected devices.
But of course saving disk space is not exactly #1 on Apple's priority list when the 512 GB and 1 TB disk upgrades sell at 80% margin.
Even 1TB isn't enough for me these days, especially since Lightroom requires your catalog be on your boot disk.
But the 2TB and 4TB upgrades, on top of costing I think $400/$800 respectively, require you to move up a CPU model as well - a HUGE increase in cost that most users can't justify through improved performance.
Depends on the use case. As someone using all 3 major desktop operating systems regularly, guess which one is my choice for performance-critical, rugged, cheap applications.
And before someone says that Microsoft doesn't go after that space: a lot of ticket machines, ATMs, kiosk displays etc. run Windows under the hood. And 9 GB times the number of devices out there is a lot of gigabytes someone has to pay for.
And even if there is a special slimmed-down version of the OS for that use case, one could argue that they would profit from their main desktop OS and their main embedded OS not drifting apart.
There's a subdirectory in the Windows folder (WinSxS) that can easily eat up 50+GB and deleting files there or in the Installer subdirectory can easily break things like updating any Adobe products you might have installed (which I've experienced directly and seen plenty of reports about online). 9GB is a joke and so is that article.
WinSxS (introduced with NT 6.0, Vista) is the component store. It stores most of the system files, which are versioned, and it's what solved DLL hell. Files in other system folders are hardlinks into it.
On my 1-year-old install, it's 9 GB with only 4 GB actually allocated to it, i.e. files with no hardlinks outside WinSxS. So delete stuff elsewhere instead of messing with this folder.
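If you want to sanity-check that apparent-vs-allocated gap yourself, here's a rough sketch (my own throwaway script, not an official tool; needs to run elevated) that splits WinSxS files into ones hardlinked elsewhere versus ones that exist only in WinSxS:

    import os

    def winsxs_footprint(root=r"C:\Windows\WinSxS"):
        # Apparent size counts every file under WinSxS; "unique" only counts files
        # with no hardlink elsewhere (st_nlink == 1), which is closer to the space
        # the folder really "owns" on disk.
        apparent = unique = 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                try:
                    st = os.stat(os.path.join(dirpath, name))
                except OSError:
                    continue                # locked or permission-denied files
                apparent += st.st_size
                if st.st_nlink == 1:
                    unique += st.st_size
        return apparent, unique

    if __name__ == "__main__":
        a, u = winsxs_footprint()
        print(f"apparent: {a / 2**30:.1f} GiB, only-in-WinSxS: {u / 2**30:.1f} GiB")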
I agree. I don’t run Windows, but even a relatively cheap laptop will have a 1tb disk. It seems silly to worry too much about an extra 9 gigs used by the OS.
Look, I'm just not interested in something that gives me code on the CLI. This is no better or worse than using ChatGPT / Canvas or any other external tool.
My project has well over half a million lines of code. I'm using an IDE (in my case Qt Creator) for a reason. I'd love to get help from an LLM but CLI or external browser windows just aren't the way. The overhead of copy/paste and lack of context is a deal breaker unfortunately.
In case I'm missing something, please let me know. I'm always happy to learn.
What I'm trying right now is two IDEs -- PyCharm for navigating around and static analysis stuff, Cursor for "how do I do this" or "please do this for me." Cursor (VSCode?) lets you choose Jetbrains keyboard shortcuts during setup and is relatively resource light so it's not too bad.
Aider operates on your file tree / repo and edits and creates files in place. So it at least lessens the copy / paste drastically. This is a very different experience than using chatgpt or Claude on web. Still not ideal UX compared to having it in the IDE though to be clear.
It was a well planned and executed publicity stunt:
1. Maximize attention on social media by being super obnoxious and arrogant ("dawg I ChatGPT’d the license")
2. 1 day later while the chatter is still going, write a mea culpa and take on the poor victim role ("grew up in a single mother household on government subsidies")
3. --> Repair most of the reputational damage but keep all the attention.
None of this is illegal, but it's exploiting a system of mutual trust and I wouldn't want to live in a world where everybody acted like that.
P.T. Barnum once said, “There’s no such thing as bad publicity,” which is almost as good as Oscar Wilde’s version, who put it like this: “There’s only one thing in the world worse than being talked about, and that is not being talked about.”
Just a nitpick: please use lowercase 'm' for 'million'. When I read the title I thought it had something to do with the company 3M (not really known for making SaaS).
Without taking away anything from the substance or achievement of this release, I find phrases like "openpilot is an operating system for robotics" always quite fishy.
No, it's not an OS for robotics. You can't do actual robotics stuff with it, like drive actuators to control limbs or grippers, do motion control or SLAM or perception or any of the usual robotics stack.
Their website correctly says openpilot is an open source advanced driver assistance system that works on 275+ car models of Toyota, Hyundai, Honda, and many other brands. Should've stuck to that.
Thinking about it some more, it's probably just another engagement baiting strategy to get attention and I'm their gullible puppet. Well played.
George Hotz says: "we developed a proper successor to ROS. openpilot has serialization (with capnp) and IPC (with zmq + a custom zero copy msgq). It uses a constellation of processes to coordinate to drive a car."[1] And Comma sells a robot that runs Openpilot: https://comma.ai/shop/body
> You can't do actual robotics stuff with it, like drive actuators to control limbs or grippers, do motion control or SLAM or perception or any of the usual robotics stack.
A lot of the "usual robotics stack" is not going to be relevant for the new wave of consumer robotics that is coming soon. It will be enabled by end-to-end machine learning and stuff like traditional SLAM methods will not be a part of that. The Bitter Lesson[2] is coming for robotics.
I enjoy Hotz as a hacker, but I'm really allergic to this kind of oversold language. "[W]e developed a proper successor to ROS" is a past tense statement, as if they've already done this thing. In reality, at best they have presented a roadmap for a thing that could approximate ROS one day.
In the robotics community, the stuff coming out of George Hotz has always been considered a kludgy mess, and unsuitable for serious work. Dude is a talented hacker, but the idea that this will replace ROS is kind of a joke.
The point of the bitter lesson is "leverage compute as best you can" not "use DNNs everywhere just because". Oftentimes your available compute is still a crappy ARM machine with no real parallel compute where the best DNN you can run is still not large nor fast enough to even be viable, much less a good fit.
And well, some classical algorithms like A* are mathematically optimal (given an admissible heuristic). You literally cannot train a more efficient DNN if your problem boils down to search over a grid; it will just waste more compute for the same result.
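To make the A* point concrete, here's a minimal sketch (plain Python, made-up grid, Manhattan heuristic) of the thing a learned planner would have to beat; with an admissible heuristic the path it returns is already optimal, so a DNN can only match it while spending more compute:

    import heapq
    from itertools import count

    def astar(grid, start, goal):
        # A* on a 4-connected grid; grid[y][x] == 1 means blocked, moves cost 1.
        def h(p):  # Manhattan distance, admissible for unit-cost 4-connected moves
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        tie = count()                        # tie-breaker so the heap never compares nodes
        open_set = [(h(start), next(tie), 0, start, None)]
        came_from, g_best = {}, {start: 0}
        while open_set:
            _, _, g, node, parent = heapq.heappop(open_set)
            if node in came_from:
                continue                     # already expanded via a cheaper route
            came_from[node] = parent
            if node == goal:                 # walk parents back to reconstruct the path
                path = [node]
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return path[::-1]
            x, y = node
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = nxt
                if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                    ng = g + 1
                    if ng < g_best.get(nxt, float("inf")):
                        g_best[nxt] = ng
                        heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, node))
        return None                          # goal unreachable

    # astar([[0, 0, 1], [0, 1, 0], [0, 0, 0]], (0, 0), (2, 2))
    # -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]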
Besides, the nav stack is not really the point of ROS. It's the standardization. Standard IPC, types, messages, package building, deployment, etc. Interoperability where you can grab literally any sensor or actuator known to man and a driver will already exist and output/require the data in the exact format you need/have, standard visualizers and controllers to plug into the mix and debug. This is something we'll need as long as new hardware keeps getting built even if the rest of your process is end to end. It doesn't have to be the best, it just needs to work and it needs to be widely used for the concept to make sense.
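As a concrete (hypothetical) example of what that standardization buys you, a minimal ROS 2 / rclpy node — node and topic names are made up, but any lidar driver publishing the standard sensor_msgs/LaserScan type plugs into it unchanged, and the ROS 1 version looks nearly the same:

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import LaserScan      # standard message type every lidar driver speaks

    class ScanRelay(Node):                     # hypothetical node
        def __init__(self):
            super().__init__('scan_relay')
            # any driver publishing sensor_msgs/LaserScan on 'scan' works here unchanged
            self.sub = self.create_subscription(LaserScan, 'scan', self.on_scan, 10)
            self.pub = self.create_publisher(LaserScan, 'scan_filtered', 10)

        def on_scan(self, msg):
            # clamp bogus readings and republish; standard tools can visualize either topic
            msg.ranges = [min(r, msg.range_max) for r in msg.ranges]
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(ScanRelay())

    if __name__ == '__main__':
        main()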
The future of consumer robotics will not be built on "a crappy ARM machine with no real parallel compute". Traditional robotics has failed to produce machines that can operate in the real world outside of strictly controlled environments, and more of the same isn't going to change that. Fast hardware for running DNNs will be a hard requirement for useful general purpose robots.
I agree that it'll be needed, but hardware that can provide enough compute at acceptable wattage has yet to materialize. Only once that changes will the equation change. Today you'd be surprised how many production UGVs run off an actual Pi 4 or something in a comparable compute ballpark.
With due respect, this has to be one of the most ignorant takes on robotics I have read in a while. Yes, you can always slap serialization and ZMQ on your framework. That doesn't make it an OS.
And no, the usual robotics stack is not going away anytime soon. Maybe develop some actual useful robots before posting like an expert on robotics topics.
I believe the idea is that openpilot replaces the usual robotics stack with an end to end neural net.
While I agree operating system is usually a marketing term, it does feel correct in this case as it is the operating system for the Comma Three, which can operate cars but also this thing: https://www.comma.ai/shop/body
Could someone explain the joke? I've been dabbling with learning robotics and I've been confused by how ROS and ROS2 both appear to be actively developed/used. Is ROS2 a slow-moving successor version (like Python 3 was) or a complete fork?
Slow-moving successor, which the community isn't exactly going wild over. It offers modest improvements in exchange for a painful upgrade process, with many of the original issues from ROS1 remaining unsolved.
The other half of the joke is that ROS was never an operating system either.
Well, there is one thing ROS 2 does better: you can declare params directly inside nodes and reconfigure them all without building extra config files. And it doesn't stop working if your local IP changes.
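For anyone who hasn't seen it, this is the pattern being described — parameters declared with defaults inside the node itself, overridable from the command line or a YAML file, no central parameter server (node and parameter names here are made up):

    import rclpy
    from rclpy.node import Node

    class Follower(Node):                       # hypothetical node
        def __init__(self):
            super().__init__('follower')
            # declared in-node with defaults; override with e.g.
            #   ros2 run my_pkg follower --ros-args -p max_speed:=2.0
            self.declare_parameter('max_speed', 1.0)
            self.declare_parameter('target_frame', 'base_link')
            self.max_speed = self.get_parameter('max_speed').value

    def main():
        rclpy.init()
        rclpy.spin(Follower())

    if __name__ == '__main__':
        main()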
But the rest are firmly downgrades all around. It's slower (rclpy is catastrophically bad), more demanding (CPU usage is through the roof doing DDS packet conversions), less reliable (the RMWs are a mess), less compatible (armhf is dead). The QoS might count as an improvement for edge cases where you need UDP for point clouds, but what it mostly does on a day-to-day basis is create a shit ton of failure cases where there's a QoS incompatibility between topics and things just refuse to connect. It's a lot more hassle for no real gain.
Config generally feels more complex though, since there isn't a central parameter server anymore. The colcon build system also just feels more complex than catkin, which I already thought was impressively complex.
Yep, it takes super long to get parameters from all nodes because you need to query each one instead of DDS caching them or something.
And yeah, I forgot: there's the added annoying bit where you can't build custom messages/services with Python packages, only ament_cmake can do it, so you often need metapackages for no practical reason. And the whole deal with the default build mode being "copy all", so you need to rebuild every single time unless you symlink-install, and even that often doesn't work. The defaults are all around impressively terrible, adding extra pitfalls in places where there were none in ROS 1.
No, it's much worse: Python 3 was better all round, it just took a while to get all your dependencies ported, which made the transition hard. Judging by the comments, it doesn't seem like people agree that ROS2 is even all-round better than ROS.
It's funny this topic came up today because I have a group of students working on a ROS2 project and at our meeting this afternoon they had a laundry list of problems they've been having related to ROS2. I'm thinking our best option is to use ROS1...
You're right, ROS2 isn't all-round better than ROS, so the transition will never fully happen.
FWIW I'm working on an actual replacement for ROS, I'll post it to ShowHN one day soonish :P
Isn't it software for training end-to-end NNs to be used in automation?
It's just that the first version is used for cars, and they have been using it for their own robot.
> This is equivalent to applying a box filter to the polygon, which is the simplest form of filtering.
Am I the only one who has trouble understanding what is meant by this? What is the exact operation that's referred to here?
I know box filters in the context of 2D image filtering and they're straightforward but the concept of applying them to shapes just doesn't make any sense to me.
The operation (filtering an ideal, mathematically perfect image) can be described in two equivalent ways:
- You take a square a single pixel spacing wide by its center and attach it to a sampling point (“center of a pixel”). The value of that pixel is then your mathematically perfect image (of a polygon) integrated over that square (and normalized). This is perhaps the more intuitive definition.
- You take a box kernel (the indicator function of that square, centered, normalized), take the convolution[1] of it with the original perfect image, then sample the result at the final points (“pixel centers”). This is the standard definition, which yields exactly the same result as long as your kernel is symmetric (which the box kernel is).
The connection with the pixel-image filtering case is that you take the perfect image to be composed of delta functions at the original pixel centers and multiplied by the original pixel values. That is, in the first definition above, “integrate” means to sum the original pixel values multiplied by the filter’s value at the original pixel centers (for a box filter, zero if outside the box—i.e. throw away the addend—and a normalization constant if inside it). Alternatively, in the second definition above, “take the convolution” means to attach a copy of the filter (still sized according to the new pixel spacing) multiplied by the original pixel value to the original pixel center and sum up any overlaps. Try proving both of these give the answer you’re already accustomed to.
This is the most honest signal-processing answer, and it might be a bit challenging to work through but my hope is that it’ll be ultimately doable. I’m sure there’ll be neighboring answers in more elementary terms, but this is ultimately a (two-dimensional) signal processing task and there’s value in knowing exactly what those signal processing people are talking about.
[1] (f∗g)(x) = (g∗f)(x) = ∫f(y)g(x-y)dy is the definition you’re most likely to encounter. Equivalently, (f∗g)(x) is f(y)g(z) integrated over the line (plane, etc.) x=y+z, which sounds a bit more vague but exposes the underlying symmetry more directly. Convolving an image with a box filter gives you, at each point, the average of the original over the box centered around that point.
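If it helps to pin the "average over the box" statement down with numbers, here's a tiny brute-force sketch on a discrete image (plain numpy, edges skipped for brevity, not meant to be efficient):

    import numpy as np

    def box_filter(img, k=3):
        # Convolution with a normalized k x k box kernel: every output pixel is
        # just the mean of the k x k neighborhood around it.
        r = k // 2
        out = np.zeros_like(img, dtype=float)
        for y in range(r, img.shape[0] - r):
            for x in range(r, img.shape[1] - r):
                out[y, x] = img[y - r:y + r + 1, x - r:x + r + 1].mean()
        return out

    img = np.random.rand(8, 8)
    filtered = box_filter(img)
    assert np.isclose(filtered[4, 4], img[3:6, 3:6].mean())  # average over the 3x3 box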
There’s a picture of the exact operation in the article. Under “Filters”, the first row of 3 pictures has the caption “Box Filter”. The one on the right (with internal caption “Contribution (product of both)”) demonstrates the analytic box filter. The analytic box filter is computed by taking the intersection of the pixel boundary with all visible polygons that touch the pixel, and then summing the resulting colors weighted by their area. Note the polygon fragments also have to be non-overlapping, so if there are overlapping polygons, the hidden parts need to be first trimmed away using boolean clipping operations. This can all be fairly expensive to compute, depending on how many overlapping polygons touch the pixel.
OK, so reading a bit further this boils down to clipping the polygon to the pixel and then using the shoelace formula for finding the area? Why call it "box filter" then?
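(To check my own understanding, here's roughly what I think that per-pixel computation looks like — Sutherland-Hodgman clipping of the polygon against the pixel square, then the shoelace formula for the clipped area. My sketch, not the article's code.)

    def clip(poly, inside, intersect):
        # Sutherland-Hodgman: clip a polygon (list of (x, y)) against one half-plane.
        out = []
        for i, cur in enumerate(poly):
            prev = poly[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def clip_to_pixel(poly, px, py):
        # Clip against the four edges of the unit pixel square [px, px+1] x [py, py+1].
        def lerp(a, b, t):
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        for axis, bound, keep_ge in ((0, px, True), (0, px + 1, False),
                                     (1, py, True), (1, py + 1, False)):
            inside = (lambda p, a=axis, b=bound, k=keep_ge:
                      p[a] >= b if k else p[a] <= b)
            intersect = (lambda p, q, a=axis, b=bound:
                         lerp(p, q, (b - p[a]) / (q[a] - p[a])))
            poly = clip(poly, inside, intersect)
            if not poly:
                return []
        return poly

    def shoelace_area(poly):
        return abs(sum(poly[i - 1][0] * p[1] - p[0] * poly[i - 1][1]
                       for i, p in enumerate(poly))) / 2

    def box_coverage(poly, px, py):
        # Fraction of pixel (px, py) covered by the polygon == its box-filtered value.
        return shoelace_area(clip_to_pixel(poly, px, py))

    # Triangle covering the lower-left half of pixel (0, 0) -> prints 0.5
    print(box_coverage([(0, 0), (1, 0), (0, 1)], 0, 0))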
It’s very useful to point out that it’s a Box Filter because the article moves on to using other filters, and larger clipping regions than a single pixel. This is framing the operation in known signal processing terminology, because that’s what you need to do in order to fully understand very high quality rendering.
Dig a little further into the “bilinear filter” and “bicubic filter” that follow the box filter discussion. They are more interesting than the box filter because the contribution of a clipped polygon is not constant across the polygon fragment, unlike the box filter which is constant across each fragment. Integrating non-constant contribution is where Green’s Theorem comes in.
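To give a flavor of what the Green's Theorem step buys: the area integral of a non-constant function over a polygon fragment collapses into a sum over its edges, just like the plain area does. A small sketch for f(x, y) = x, the building block of a linear ramp (standard identities, counter-clockwise vertex order assumed; this isn't the article's code):

    def polygon_area(poly):
        # Green's theorem with the constant function 1: half the sum of edge cross products.
        return sum(x0 * y1 - x1 * y0
                   for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1])) / 2

    def integral_x(poly):
        # Green's theorem with f(x, y) = x: the integral of x over the interior,
        # again just a sum over edges.
        return sum((x0 + x1) * (x0 * y1 - x1 * y0)
                   for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1])) / 6

    # Unit square: area 1, and the integral of x over it is 0.5 (mean x is 0.5).
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(polygon_area(square), integral_x(square))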
It’s also conceptually useful to understand the equivalence between box filtering with analytic computation and box filtering with multi-sample point sampling. It is the same mathematical convolution in both cases, but it is expressed very differently depending on how you sample & integrate.
Oh interesting, I hadn’t thought about it, but why does it seem like poor naming? I believe “filter” is totally standard in signal processing and has been for a long time, and that term does make sense to me in this case because what we’re trying to do is low-pass filter the signal, really. The filtering is achieved through convolution, and to your point, I think there are cases where the filter function does get referred to as a basis function.
I would think that conceptually a basis function is different from a filter function because a basis function is usually about transforming a point in one space into some different space, and basis functions come in a set whose size is the dimensionality of the target space. Filters, even if you can think of the function as a sort of basis, aren’t meant for changing spaces or encoding & decoding against a different basis than the signal. Filters transform the signal but keep it in the same space it started from, and the filter is singular and might lose data.
It is more like the convolution of the shape with the filter (you take the product of the filter, at various offsets, with the polygon and integrate).
Essentially if you have a polygon function p(x,y) => { 1 if inside the polygon, otherwise 0 }, and a filter function f(x,y) centered at the origin, then you can evaluate the filter at any point x_0,y_0 with the double-integral / total sum of f(x-x_0,y-y_0)*p(x,y).
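Here's a quick numerical sketch of that expression (brute-force Monte Carlo over a made-up triangle and a unit box filter), which also shows why the analytic result is the limit of supersampling with ever more sample points:

    import random

    def p(x, y):   # indicator of a hypothetical triangle with vertices (0,0), (4,0), (0,4)
        return 1.0 if x >= 0 and y >= 0 and x + y <= 4 else 0.0

    def f(x, y):   # box filter, one pixel wide, centered at the origin
        return 1.0 if abs(x) <= 0.5 and abs(y) <= 0.5 else 0.0

    def filtered_value(x0, y0, n=200_000):
        # Monte Carlo estimate of the double integral of f(x - x0, y - y0) * p(x, y):
        # sample uniformly over the filter's support (area 1) and average.
        acc = 0.0
        for _ in range(n):
            x = x0 + random.uniform(-0.5, 0.5)
            y = y0 + random.uniform(-0.5, 0.5)
            acc += f(x - x0, y - y0) * p(x, y)   # f is 1 over the sampled box
        return acc / n

    # ~1.0 for a pixel well inside the triangle, ~0.5 where the edge x + y = 4
    # passes through the pixel center -- the same coverage the analytic
    # clip-and-area computation gives exactly.
    print(filtered_value(1.5, 1.5), filtered_value(2.0, 2.0))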
This kind of makes sense from a mathematical point of view, but how would this look implementation-wise, in a scenario where you need to render a polygon scene? The article states that box filters are "the simplest form of filtering", but it sounds quite non-trivial for that use case.
If it essentially calculates the area of the polygon inside the pixel box and then assigns a colour to the pixel based on the area portion, how would any spatial aliasing artifacts appear? Shouldn't it be equivalent to super-sampling with infinite sample points?
It literally means that you take a box-shaped piece of the polygon, ie. the intersection of the polygon and a box (a square, in this case the size of one pixel). And do this for each pixel as they’re processed by the rasterizer. If you think of a polygon as a function from R^2 to {0, 1}, where every point inside the polygon maps to 1, then it’s just a signal that you can apply filters to.
But as I understand it, the article is about rasterization, so if we filter after rasterization, the sampling has already happened, no? In other words: Isn't this about using the intersection of polygon x square instead of single sample per pixel rasterization?
This is about taking an analytic sample of the scene with an expression that includes and accounts for the choice of filter, instead of integrating some number of point samples of the scene within a pixel.
In this case, the filtering and the sampling of the scene are both wrapped into the operation of intersection of the square with polygons. The filtering and the sampling are happening during rasterization, not before or after.
Keep in mind a pixel is an image sample, which is different from taking one or many point-samples of the scene in order to compute the pixel color.
The problem is determining the coverage, the contribution of the polygon to a pixel's final color, weighted by a filter. This is relevant at polygon edges, where a pixel straddles one or more edges, and some sort of anti-aliasing is required to prevent jaggies[1] and similar aliasing artifacts, such as moiré, which would result from naive discretization (where each pixel is either 100% or 0% covered by a polygon, typically based on whether the polygon covers the pixel center).
I agree. What I personally would love is a WYSIWYG front end to a static site generator that uses eex or erb. If the tool is sufficiently open source and works well with some hand tweaking of generated HTML, then eex/erb isn't strictly necessary.
I'm optimistic about this though, because my suspicion is that since this tool just exports React, you could relatively easily achieve this using Next.js SSG building. As long as you aren't doing any build-time or runtime dynamic data loading, adding one more step lets you use this tool for that workflow, with the bonus that if complexity grows to the point where you'd want to componentize, your tool is ready for it.
Pinegrow Web Editor and Bootstrap Studio could fit the bill. No subscription, no cloud, one-time purchase. Exported HTML is fully readable and editable outside the app.
> because my suspicion is that since this tool just exports React, you could relatively easily achieve this using Next.js SSG building
At the core, we generate pure HTML and CSS, then serialize those into React and Tailwind. It would be one less step to expose the HTML and CSS instead. I wanted a narrow scope for this so that's the focus, but I imagine there's a plugin setup we could do to swap in whatever framework (or non-framework) you need.
"the thing I actually want is just Webflow but without the BS and predatory pricing" - checkout Webstudio, it's free and open source - https://webstudio.is/
I'm not super deep into web development and since it doesn't have a demo or any other visual preview unfortunately I don't understand what this does or how it could be useful. Let alone how to "install" it for "my website".
I get where you're coming from. FWIW, at many points I considered just paying for Webflow, but their pricing is just nuts. I'm simply not going to pay ~$1k per year for them to host my static site with a couple thousand visitors per month.
If Webflow had a competitor with same functionality and reasonable pricing I'd gladly pay for that.