
Qalculate! has been my go-to calculator on my laptop for years, and I'm very happy to have it on my phone now too! And it definitely knows planet radii; try `planet("earth"; "radius")`. Specifically, it knows a bunch of attributes about atoms (most importantly to me, their atomic mass) and planets (including niche things like mean surface temperature). You can see all the data here: https://qalculate.github.io/manual/qalculate-definitions-fun...


Woot! I need to read the docs better! Thanks for sharing that :D


The issue is not so much that it's cellulosic as that it's lignocellulosic; we have a pretty good grasp these days on cellulose (and hemicellulose, but we can handle that too, so I'll skip over it). Lignin is (oversimplifying, but not by too much) what makes the difference between soft paper pulp and hard woodchips. The lignin crosslinks the (hemi)cellulose and makes it much harder to access for the enzymes we'd use to break down the cellulose, while itself being very difficult to break down (it's thought that the whole reason we have coal is that it took fungi so long to evolve the enzymes needed to break down lignin effectively).

You need a bunch of equipment and/or biochemical processing to break down lignocellulosic plant matter into something that can be efficiently fermented (keyword: "lignocellulosic biomass pretreatment"), so while that may be available at lab scale, it's not necessarily feasible at industrial scale (yet).

As one example, there's a method called "steam explosion" where you apply very hot (around 160-260 °C), high-pressure (tens of atmospheres) steam to the biomass, then release the pressure relatively quickly. The hot steam drives chemical reactions like hydrolysis, and the pressure release breaks the material down physically as the steam expands. Imagine the sort of equipment you'd need to do that on an industrial scale. By no means out of reach for modern chemical engineering, but someone has to build it.

Now, as you might guess from how relatively non-woody switchgrass is, it doesn't have a ton of lignin compared to woodchips, but it still has enough to be problematic, and in fact there's research being done on how to reduce its lignin content, such as by genetic engineering.


"next-generation sequencing" is a term of art in this case


I see; it sounded like a buzzword (which I guess it still might be). The point is that the current title makes it sound like a general comparison, while the original title makes it clear that it's comparing three implementations of a single tool.


Dane here. Yes, it's because of limited testing capacity. New machines arrived recently and they are ramping up to 1,000 tests per day; the additional capacity will, among other things, go to surveillance of cases with mild symptoms, modeled on our existing system for influenza-like illness surveillance, where samples are, well, sampled from a specific subset of GPs. But it's not only those with severe symptoms who are tested: it's anyone who is hospitalized (which also includes members of vulnerable populations with only moderate symptoms) and health care workers, with the goal being to prevent hospital-acquired infections. In a single sentence, the strategy could be summed up as "people who aren't in the hospital should assume it's COVID-19 if they have symptoms and act accordingly, but once they get to the hospital we can't afford to assume".


Have you considered learning stenography? The Open Steno Project is very DIY, and I'm sure they'd be thrilled to help think up an even more mobile design for a steno writer (though AFAIK they're already very usable when walking around with a harness). When it comes to novel input methods, I really think it's a great idea far too few people know about.


I noticed that the paper mentions constructing a triangulation of a set of initial sample points, which would allow for the optimization domain to be any convex polytope. Is this something you are planning to explore/incorporate?

EDIT: I see now that this is not an implementation of the algorithm in the paper, but its own algorithm entirely. Even so, it appears to me that the same technique could be used in this case.


Hi there,

Yes, I hope to add that feature soon-ish. The reason I left it out of this first release is that computing Delaunay triangulations scales very poorly with dimensionality; that's why TRIOPT was restricted to low-dimensional spaces. I will have to implement a more efficient way to compute nonoverlapping triangulations. In the meantime, you can simply make the optimization domain bigger so that it encloses the shape of your problem domain.
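
For anyone curious, here's a minimal sketch of the triangulation side of this in Python with scipy (illustrative only, not this project's actual code; the sample points are made up):

    import numpy as np
    from scipy.spatial import Delaunay

    # Made-up initial samples inside a 2D convex domain.
    rng = np.random.default_rng(0)
    samples = rng.uniform(0.0, 1.0, size=(20, 2))

    tri = Delaunay(samples)    # nonoverlapping triangulation of the samples
    print(len(tri.simplices))  # simplex count (grows quickly with dimension)

    # Locate which simplex a candidate point falls in (-1 if outside the hull).
    print(tri.find_simplex(np.array([[0.5, 0.5]])))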


I think the point is that you can build a boat out of bricks, and you can build neural networks out of linear regression. But that doesn't make modern AI "just fancy linear regression", any more than modern boats are "just fancy bricks".
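
To make the analogy concrete, here's a toy numpy sketch (weights hand-picked for illustration): each layer on its own is literally a linear regression, but composing two of them through a nonlinearity computes XOR, which no single linear model can represent.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

    # Each layer on its own is "just" a linear map (the brick)...
    W1 = np.array([[1, 1], [1, 1]]); b1 = np.array([0, -1])
    W2 = np.array([[1], [-2]]);      b2 = np.array([0])

    # ...but stacking them through a nonlinearity yields XOR (the boat).
    h = relu(X @ W1 + b1)
    print((h @ W2 + b2).ravel())  # [0 1 1 0]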


The theoretical property those hacks were aiming to compute is called the transitive reduction. And you don't need any hacks to calculate it: Graphviz comes with the `tred` tool for exactly that.
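
If you'd rather do it in code than via the `tred` CLI, networkx ships an equivalent (a minimal sketch on a made-up DAG; assumes a reasonably recent networkx):

    import networkx as nx

    # A DAG with a redundant edge a->c (already implied by a->b->c).
    g = nx.DiGraph([("a", "b"), ("b", "c"), ("a", "c")])

    # Same reachability, minimal edge set.
    tr = nx.transitive_reduction(g)
    print(sorted(tr.edges()))  # [('a', 'b'), ('b', 'c')]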


In some cases you can forget bad input, though, can't you? It's a rather wasteful example, but if (like you say, having established trust beforehand) every element of a grow-only set is a signed message, you can reject any messages that don't validate against their signature when merging. On one hand, this breaks idempotence. On the other, if we regard that set as a subset of the set of validly signed messages, the invalid message could never have been in it in the first place, so to speak, so I wouldn't immediately think it breaks any guarantees. As far as I can tell, any properly functioning machine will only ever see correct state this way, unless it receives some state and doesn't merge it with anything, for example when initially starting to participate. But that's easily fixed by always having some state, for example by starting with the empty set in this case.
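
Something like this toy sketch of the merge, say (verify() is a placeholder, not any real crypto API):

    # Toy grow-only set whose elements are (message, signature) pairs.
    def verify(msg, sig):
        return sig == hash(msg)  # stand-in for real signature verification

    def merge(local, remote):
        # Set union, but dropping any element that fails verification.
        return local | {(m, s) for (m, s) in remote if verify(m, s)}

    a = {("hello", hash("hello"))}
    b = {("world", hash("world")), ("forged", 12345)}
    print(merge(a, b))  # the forged entry never enters the merged state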


> you can reject any messages that don't validate against their signature when merging.

That's not really what I meant by "bad". I meant bad as in intent, not structural and immediately verifiable message integrity.

If all entities in the coalescing set can independently verify that a message does not meet its signature requirements, it will be rejected and idempotence is maintained.

If even ONE member is somehow deceived about key validity, it'll eventually propagate the message into every other member's state.


For the record, what you're smelling is acetic acid, i.e. vinegar. When aspirin (acetylsalicylic acid) hydrolyzes into salicylic acid, the other product of that reaction is acetic acid.
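
For reference, the balanced hydrolysis, written with molecular formulas:

    C9H8O4 (aspirin) + H2O -> C7H6O3 (salicylic acid) + C2H4O2 (acetic acid)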


Thanks! It's been 25 years since my last pharmacology class, so the details are all getting a bit vague.

