In this country we call them "food banks".

There is another far worse option that you seem to be missing. Why keep any significant number of people around at all?


So many billionaires talk about how the world needs more people that few stop to question whether they actually mean it.

Meanwhile, they believe that AGI will render human workers (and humans?) obsolete, and they understand that every resource is finite, including power. And although they talk a big game about how it's going to be utopia, they have lived their entire lives being the most successful/ruthless players in an economy that is, no matter what apologetics are spewed, a zero-sum game.

If I've lived in a world that has rewarded being duplicitous and merciless with great riches, and I know that having to share with an ever-increasing number of people also increases the likelihood that I won't live like a god-king, why wouldn't I sell a happy soma-vision while secretly hoping for (or planning) a great depopulation event?


Coincidentally, Palin kept an office in Neal's Yard!

https://en.wikipedia.org/wiki/Neal%27s_Yard


It looks like those instructions are from 2011. CNC machines are pretty common these days and could help simplify it quite a bit.

For example: https://hub.shapertools.com/creators/5cfea3909fc9260017675dc...


Routing or skill-sawing a circle is not rocket science that requires a CNC router thingie.


I've had the PID-enabled version of the Quick Mill Carola for a few years now, and I don't think I would recommend it. The #1 issue for me is the water reservoir and its fill sensor: the reservoir is too difficult to refill, it's too difficult to see how full it is, and the sensor is really unreliable; it will beep loudly and prevent warm-up even when the machine is full. I ended up disabling the sensor. I won't be getting rid of my Carola, but I think there are better choices now; something like the ECM Puristika, perhaps.


I cannot even remember the last time I plugged in a webcam and it didn't work on Linux. Just yesterday I borrowed a USB inspection camera from a friend to help me run a new Ethernet line to my shop. The kit came with a "WIFI dongle" that you are supposed to use with your phone and some random app, but instead I just plugged it into my laptop, fired up Cheese, and it came up immediately.
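If you want to sanity-check what the kernel actually registered, something like this works (a rough sketch; it assumes the standard sysfs layout and needs no extra packages):

    # List the V4L2 capture nodes the kernel has registered,
    # reading the human-readable name sysfs exposes for each.
    from pathlib import Path

    for dev in sorted(Path("/sys/class/video4linux").glob("video*")):
        name = (dev / "name").read_text().strip()
        print(f"/dev/{dev.name}: {name}")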


External webcams are no problem, agreed.

New-ish laptops with Intel Tiger Lake processors use MIPI cameras with ipu6, and those don't work out of the box, or only work on specific kernel/distro combos.

There are efforts to mainline the driver, and things are better than two years ago, but it is still a big step backwards. [1]

[1] https://github.com/intel/ipu6-drivers
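A quick way to check whether you are even on the ipu6 path (a rough sketch; the module names are taken from that repo and may differ per kernel):

    # Check whether the out-of-tree ipu6 modules are loaded and
    # whether they have exposed any V4L2 nodes yet.
    from pathlib import Path

    mods = [line.split()[0]
            for line in Path("/proc/modules").read_text().splitlines()
            if line.startswith("intel_ipu6")]
    print("ipu6 modules loaded:", ", ".join(mods) or "none")

    for dev in sorted(Path("/sys/class/video4linux").glob("video*")):
        name = (dev / "name").read_text().strip()
        if "ipu" in name.lower():
            print(f"/dev/{dev.name}: {name}")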


To Intel's credit, ipu6 packs a ton of super-advanced capabilities in. Having a good video pipeline is a huge edge. That it took a while for upstreaming to get into gear on Linux does not astound me. This feels like an area where the open-source world has to gain purchase first before real forward traction can start.

This was a super shitty experience though. It really felt unplanned & chaotic. Hopefully some of the kernel architecture carved out for ipu6 is good & useful for running other video pipelines.

Most webcam users don't consciously think much about color science, but ideally our devices can.


We owe Apple here.

The lack of third-party device drivers on iOS meant that manufacturers had to get serious about making their devices follow the USB class specifications, because otherwise they would not work on an iPad.

Linux (which has USB class drivers) has only benefitted from this.


While true, cameras on Linux are still a bit of a hassle: try getting one webcam into two applications at the same time, or cropping or rotating the image in one app but not another, etc.

This is getting better/fixed with PipeWire, though.
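The sharing problem is easy to demonstrate (a rough sketch; it assumes opencv-python is installed and a camera sits at /dev/video0):

    # Open the same V4L2 device from two consumers. With plain V4L2
    # the second stream typically fails with EBUSY; routing the camera
    # through a PipeWire node lets both consumers get frames.
    import cv2

    first = cv2.VideoCapture(0, cv2.CAP_V4L2)
    second = cv2.VideoCapture(0, cv2.CAP_V4L2)

    ok_first, _ = first.read()
    ok_second, _ = second.read()
    print("first consumer got a frame:", ok_first)
    print("second consumer got a frame:", ok_second)

    first.release()
    second.release()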


If it cannot explain how it was proven, was it actually proven?


No. Funny how these discussions too often devolve into semantics lol.


Funny how people don't understand basic logic. If it is a proof in a logic, and the machine checked that proof, it is a proof, no matter that no human actually understands it.

A human doesn't need to understand the proof, they just have to understand why the proof is a proof.
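For a concrete taste of this, here is a minimal Lean 4 sketch: `decide` builds a proof term by brute evaluation, nobody ever reads that term, and the kernel still checks it.

    -- `decide` constructs a proof term no human reads;
    -- the kernel checks it, so it is a proof.
    theorem small_fact : 2 ^ 10 = 1024 := by decide

You don't need to understand the generated term, only why kernel acceptance makes it a proof.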


The useful thing about proofs is that they are written in English (or another natural language), not formal logic. In general they can be mapped to formal logic, though. This means that people can digest them on an intuitive level. The actual goal of a proof is to create new knowledge (via the proof) which can be distributed across the mathematical community. If proofs exist but are not easily comprehensible, then they don't accomplish this goal.


If a proof is understandable only by a few people in the world, I would say it fails as a proof, according to your definition. There are many such proofs out there right now. It doesn't help that many of them are wrong, albeit usually in a fixable way.

Making a proof formal doesn't mean it stops being understandable. First, a machine can certainly "understand" it now. I think that as AI improves, what exactly such an understanding is worth, and what you can do with it, will increase a lot.

Secondly, the inscrutable tactics-based proofs featured in Lean (I was writing such proofs in Isabelle back in 1996) are actually quite old-fashioned for such a "modern" prover. Isabelle has long featured human-friendly structured syntax (Isar), and such proofs are certainly as understandable as English text, if written with care.
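To illustrate the difference (a rough sketch in Lean 4 rather than Isabelle, but the same idea): the same trivial fact, once as an opaque tactic call and once as an explicit calculation a human can read line by line.

    -- Opaque tactic style: the machine is convinced, the reader isn't.
    example (a b : Nat) (h : a = b) : a + a = b + b := by
      simp [h]

    -- Structured style: each rewriting step is spelled out.
    example (a b : Nat) (h : a = b) : a + a = b + b := by
      calc a + a = b + a := by rw [h]
        _ = b + b := by rw [h]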

What we will soon get are proof assistants which allow you to write your proofs (and definitions) in English, but which are still fully formal and checkable. That is an immense help if you are producing new knowledge, because usually nobody else looks at your work with sufficient scrutiny in the first place. If you understand it, and in addition the machine "understands" it, I think that will be the gold standard for proofs very soon.

It will also help mathematicians to clean up their act ;-)


Well... assuming a human made no mistakes setting up that logic.


Of course. That falls under "understanding why the proof is a proof".


Now we only need to find that human that never makes mistakes and we're golden...


Luckily, that is not necessary. You can make many mistakes until you arrive at a logic you are happy with. Then you talk with other humans about it, and eventually you will all agree that, to the best of your knowledge, there is no mistake in the logic. If you pick first-order logic, that work has already been done for you.

Then you need to implement that logic in software, and again, you can and will make mistakes here. You will use the first version of that software, or other logic software, to verify that your informal argument for why your logic implementation is correct can be formalised and checked. You will find mistakes, fix them, and check that your correctness proof still goes through. It is very unlikely that it won't, but if it doesn't, you fix your correctness proof. If you can indeed fix it, you are done; no mistakes remain. If you cannot, something must be wrong with your implementation, so rinse and repeat.

At the end of this, you have a logic and a logic implementation that contain no mistakes. Guaranteed.


We just have different definitions of what a proof is. Hence, semantics.


I would rather say that I actually have a definition of what a proof is, and you don't. But feel free to prove me wrong (pun intended), and tell me yours.


If it cannot explain to a human*

Yes, it was still proven. If I don't speak English but come up with a proof, I can't communicate the proof, but I have still proven it (i.e., created a proof).


I love the idea of a cool cutting board and have made a few of them myself, but they are a consumable kitchen tool and don't last long with regular use unless you are really dainty with them. I use my cutting boards a lot, so I tend to save the fancy boards for presentation and serving. For actual daily use I make simple side-grain boards from domestic woods (maple, cherry, or birch) that I can build quickly and run through my planer when they get smelly, stained, or overly damaged. Eventually they get too thin and I make a new one.


I cook 4-5 days a week every week and have never worn through a quality cutting board. Cheap plastic ones will die, but good-quality wood has lasted well for me. Epicurean also makes a composite board which has lasted me many years with regular use.


I would guess that you are using a hard (sugar) maple cutting board or something similar. I prefer softer woods such as bigleaf maple, cherry, or birch in order to preserve my knife edge.


I agree that you'll never wear through a normal wood cutting board in normal use.

But you will scratch the surface immediately, and you should be cleaning it aggressively, so you gotta keep that in mind before putting days or weeks into making a beautifully polished, highly detailed work of art.


Besides which, it seems to me that Apple has never really been against having a single giant corporation control everything you can see, do, and say; they were just against that corporation not being Apple.


They're pretty good at resisting that tendency, IMO. They're not pre-Musk Twitter or anything like that. Do you have examples?


"Is the opposite of yes no?"

