
A crimes against humanity tribunal at The Hague?



Looks like some of the contraptions in the video made by Akiyuki are for sale at BuildaMOC: https://buildamoc.com/search?q=akiyuki&type=product

Searching for GBC yields a bunch of other kits: https://buildamoc.com/search?q=gbc&type=product


Any way to get access? All the AI product waitlists are killing me and I was stuck for months on the last GHNext waitlist.


No, an LHC beam has a per-particle energy of over 6 teraelectronvolts; this device is only capable of a maximum of 12 kiloelectronvolts.

According to the article, if they make it 100 times more powerful it would be useful for producing radioisotopes for PET scans, protons for proton therapy, heavy ions for heavy ion therapy, etc. The particle accelerators currently used in medical facilities are big and expensive, so this would make them far more accessible to hospitals worldwide.


True, of course, but miniaturization is always lurking in the background of every big physics project. When I was in college, we had a Nobel laureate in physics who won the race to photograph an atom by figuring out how to do it on a tabletop. While everyone else was figuring out how to build bigger and bigger systems, he worked out how to trap a single (I think) cadmium ion with magnets and make it glow.


Yes, they're applying the technique to miniature electronics; I'm wondering what the physical limits are to laser acceleration of particles (protons, say). It seems like you could transfer momentum to any particle traveling slower than light speed, so you could get a relatively high momentum, and you could accelerate neutral particles ...

Why would it not work?

For the avoidance of doubt, I'm not imagining a miniature device, just possibly a linear device that is smaller than the LHC. They used linear accelerators prior to the development of synchrocyclotrons, IIRC.

Seems like it would be somewhat akin to a beam of single particle light-sails.

Basically, I was imagining this - https://www.sciencedaily.com/releases/2023/07/230731110723.h... "New method improves proton acceleration with high power laser".

All my best ideas have already been done!


Tenochtitlan had a sophisticated system of urban planning with zoned public bathrooms (latrines, really) that were cleaned at night by a large labor force dedicated to keeping the city clean. The waste was collected by canoe from these bathrooms and dumped into the canal system, where it decomposed and was later dredged up to fertilize and replenish the topsoil on the chinampas. Fresh drinking water was provided by mountain springs connected to the city by several aqueducts built in the 15th century.


A surprising amount! They were mostly pictorial, but the Aztecs and their vassal cities were very meticulous about keeping government records of all kinds, and even though almost all of Tenochtitlan's documents were destroyed during the battle, some of them were backed up in other cities. Since the conquistadors spent some time in the city as guests of Montezuma, we've also got firsthand accounts from the soldiers and friars.


It's not a coincidence. The conquistadors spent months in Tenochtitlan as guests of Montezuma and wrote extensively about how amazing the urban planning was. They would have preferred to keep the city intact, all things considered.

Since the Aztecs had done all of the hard work of figuring out how to build out drainage and stability with the chinampas, the Spaniards built their new buildings on top of the foundations remaining from the Aztec ones. It then took several centuries to fill in all of the canals and turn them into streets, so the layout of Mexico City very much reflects Tenochtitlan's.

For example, the Zócalo square is right where the Aztec ceremonial center used to be, and I believe the Metropolitan Cathedral was originally built on top of the foundations of a minor temple that was part of the Templo Mayor complex.


Not an NLP expert, but the biggest difference in my experience is guided focus, so to speak. When summarizing something huge like the US Code, for example, you can tell the LLM to focus on specific topics and anything adjacent to them so that it ignores irrelevant details (which is usually >99.9% of the text in my use case). The word relationships encoded in the LLM are really good at identifying important adjacent topics and entities.

LLMs are also really good at the harder NLP problems like coreference resolution, dependency parsing, and relation extraction, which makes a huge difference when using recursive summarization on complex documents where something like "the Commissioner" might be defined at the beginning and used throughout a 100,000 token document. When instructed, the LLM can track the definitions in memory itself and even modify them live by calling OpenAI functions.
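
Roughly, the guided recursive pass looks like this. A minimal sketch assuming the pre-1.0 openai Python client; the model name, chunk size, and focus topics are placeholders:

  import openai

  # Caller-supplied topics the summary should stay focused on.
  FOCUS_TOPICS = ["tax credits", "eligibility requirements"]

  def summarize(text: str, chunk_size: int = 8000) -> str:
      # Split into chunks, summarize each with the focus topics in the
      # system prompt, then recursively summarize the concatenation.
      chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
      summaries = []
      for chunk in chunks:
          resp = openai.ChatCompletion.create(
              model="gpt-4",
              messages=[
                  {"role": "system", "content": (
                      "Summarize the passage. Focus only on these topics and "
                      "anything directly adjacent to them; ignore everything "
                      "else: " + ", ".join(FOCUS_TOPICS))},
                  {"role": "user", "content": chunk},
              ],
          )
          summaries.append(resp.choices[0].message.content)
      combined = "\n".join(summaries)
      # One chunk means we're done; otherwise summarize the summaries.
      return combined if len(chunks) == 1 else summarize(combined, chunk_size)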


Interesting, so maybe not my trivial "summarize an article" example, but clearly the upper bound on what's possible is higher and more interesting.


Might I ask how you use OpenAI's function calling here? That's the one bit of their functionality I haven't really explored.


I use OpenAI function calling most of the time I use the OpenAI API, since that's the easiest way to get structured data and implement retry logic.

The simplest implementation is "retrieve_definition(word_to_lookup, word_to_replace)" with some number of tokens at the beginning of the prompt dedicated to definitions. You can use a separate LLM call with a long list of words (without their definitions) to do the actual selection, since sometimes there might be ambiguity, which the LLM can usually figure out itself (it can also include both definitions when it's too uncertain, if instructed).
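
The skeleton looks something like this, sketched against the legacy "functions" parameter of the pre-1.0 openai client; DEFINITIONS stands in for however the definitions get extracted from the document's opening sections:

  import json
  import openai

  # Stand-in for definitions scraped from the start of the document.
  DEFINITIONS = {"the Commissioner": "the Commissioner of Internal Revenue"}

  functions = [{
      "name": "retrieve_definition",
      "description": "Look up a defined term and the text it should replace.",
      "parameters": {
          "type": "object",
          "properties": {
              "word_to_lookup": {"type": "string"},
              "word_to_replace": {"type": "string"},
          },
          "required": ["word_to_lookup", "word_to_replace"],
      },
  }]

  messages = [{"role": "user",
               "content": "Summarize: ... the Commissioner shall ..."}]
  resp = openai.ChatCompletion.create(model="gpt-4", messages=messages,
                                      functions=functions,
                                      function_call="auto")
  msg = resp.choices[0].message
  if msg.get("function_call"):
      args = json.loads(msg.function_call.arguments)
      # Feed the definition back and let the model finish the summary.
      messages.append(msg)
      messages.append({"role": "function", "name": "retrieve_definition",
                       "content": DEFINITIONS.get(args["word_to_lookup"],
                                                  "no definition found")})
      resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)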

A more complex variant does multiple passes: the first pass identifies ambiguous words in each chunk, the second identifies their definitions, and the third does the actual summarization, using the output of the previous passes to craft the prompt.
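
In outline, with ask_llm as a hypothetical single-call helper and the prompts purely illustrative:

  import openai

  def ask_llm(system: str, text: str) -> str:
      resp = openai.ChatCompletion.create(
          model="gpt-4",
          messages=[{"role": "system", "content": system},
                    {"role": "user", "content": text}])
      return resp.choices[0].message.content

  def three_pass_summary(chunks: list[str]) -> str:
      # Pass 1: flag ambiguous or defined terms in each chunk.
      flagged = [ask_llm("List the defined or ambiguous terms, one per line.",
                         c) for c in chunks]
      # Pass 2: resolve each flagged term against the full text.
      glossary = ask_llm("For each term listed before the ---, give the "
                         "definition it carries in the text after the ---.",
                         "\n".join(flagged) + "\n---\n" + "\n".join(chunks))
      # Pass 3: summarize with the glossary baked into the prompt.
      return ask_llm("Using these definitions:\n" + glossary +
                     "\nSummarize the following.", "\n".join(chunks))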


I’m no expert and I don’t know the specifics of this vertical garden, but trees and shrubs of that size usually have deep, wide root systems that give them incredible resilience. Without that root system and the huge volume of dirt to support it, the plants probably require far more monitoring by actual botanists instead of just landscapers.

I don’t think the problem is just the amount or cost of maintenance but how slow it is: the maintenance staff work on the outside of the building, hanging from the top. Safety procedures alone would eat a lot into that work time.

