
The wordy article boils down to: the baths were infested by bacteria and worms due to insufficient water cleaning. Interesting, but could be compressed to 2-3 paragraphs without losing any detail.

Thanks for posting though.


Some other interesting things from the article that stuck with me:

- Most toilets weren't connected to the sewers because people didn't want the smell of the sewers in their houses.

- The public latrines were considered a bad choice because of the lack of privacy and the shared sponges (based on texts on the walls).

- The sewers were built mainly for the convenience of not having to transport water to and from places within the city, not necessarily for hygiene reasons.

- Apparently some emperor did realize that it wasn't smart for the sick and the healthy to bathe at the same time, but he decided the sick would bathe before the healthy.

Of course, if you only wanted to know if they cleaned the water in bath houses sufficiently, then your summary suffices as well.


> Apparently some emperor did realize that it wasn't smart for the sick and the healthy to bathe at the same time, but he decided the sick would bathe before the healthy

Maybe he thought they were more in need of cleansing.

Even with today's knowledge, but without today's tech (overnight chlorination, say), would it really make a difference? Today's first bathing session still comes right after yesterday's second.


A simple bioactive sand filter would probably be sufficient: https://en.wikipedia.org/wiki/Biosand_filter


That too came a bit late:

> proposed by Dr. David Manz in the late 1980s [...]


I'm curious as to why you chose that particular vehicle. It's an interesting choice for someone on HN. What convinced you that this car was right for you?


The concentration of lithium in seawater is quite low, approximately 0.17 milligrams per liter (mg/L) or 0.17 parts per million (ppm).

To determine how much seawater is needed to obtain 1 gram of lithium, you can set up a proportion:

Given:

1 liter of seawater = 0.17 mg of lithium

x liters of seawater = 1,000 mg (1 gram) of lithium

Using cross-multiplication:

x = (1,000 mg * 1 liter) / 0.17 mg

x ≈ 5,882.35 liters

Thus, you would need to process approximately 5,882.35 liters (or about 5.88 cubic meters) of seawater to obtain 1 gram of lithium.
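As a quick sanity check of that arithmetic (a throwaway Python snippet, using the same assumed concentration of 0.17 mg/L as above):

    # Liters of seawater needed to contain a given mass of lithium,
    # assuming 0.17 mg of lithium per liter of seawater.
    LITHIUM_MG_PER_LITER = 0.17

    def liters_needed(grams_of_lithium: float) -> float:
        return grams_of_lithium * 1000.0 / LITHIUM_MG_PER_LITER

    print(f"{liters_needed(1):,.0f} L per gram")         # ~5,882 L (about 5.9 m^3)
    print(f"{liters_needed(1000):,.0f} L per kilogram")  # ~5.9 million L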

In practice, extracting lithium from seawater is more challenging due to its low concentration and the presence of other elements. Techniques have been proposed and researched, but as of my last update in 2021, they were not commercially competitive with other sources of lithium like mineral deposits.


I wonder if there's a certain depth at which the water contains higher concentrations.


Not sure why this was flagged. Not substantiated with citations, and not differentiated enough, sure, but not wrong.

https://pubmed.ncbi.nlm.nih.gov/6725561/

This is just the first Google result; there are countless others. And it is quite common knowledge.


Why would everything have to be "substantiated with citations"?

It's a comment in a discussion, not a research contribution. It should be judged (not "peer reviewed" as a research claim would be, but judged and accepted or not as a comment in a discussion) based on whether it has merit to the best of the participants' knowledge, like any other claim.

Not based on whether it comes with a bibliography. If someone wants, they can go and verify it.


Does anyone know if there's a method to effectively "subscribe" to news pertaining to topics like this? I'd greatly appreciate being able to keep up with the latest research in certain fields, such as this one.


I'm working on such a project now. The idea is to use NLP to track the subjects discussed on HN and index their occurrences (roughly along the lines of the sketch below). When completed, it will be open source.

A direct answer to your question: not that I am aware of, but would like to learn more.
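To make the indexing idea a bit more concrete, here is a toy sketch against the public HN Firebase API. It's only an illustration of the general approach, not the actual project: the "NLP" here is deliberately dumbed down to stopword-filtered word counting.

    import collections
    import json
    import re
    import urllib.request

    HN_API = "https://hacker-news.firebaseio.com/v0"
    STOPWORDS = {"the", "a", "an", "of", "for", "to", "in", "on", "and", "is", "with", "how", "why", "your"}

    def fetch_json(path):
        # Tiny helper around the public HN API (no auth required).
        with urllib.request.urlopen(f"{HN_API}/{path}.json") as resp:
            return json.load(resp)

    def index_topics(n_stories=50):
        # Count keyword occurrences across the current top stories' titles.
        index = collections.Counter()
        for story_id in fetch_json("topstories")[:n_stories]:
            item = fetch_json(f"item/{story_id}") or {}
            words = re.findall(r"[a-z0-9+#.-]+", item.get("title", "").lower())
            index.update(w for w in words if w not in STOPWORDS and len(w) > 2)
        return index

    for word, count in index_topics().most_common(20):
        print(f"{count:3d}  {word}")

A real version would replace the Counter with proper entity/topic extraction and persist the index, so you could subscribe to a topic and be notified when it reappears.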


Well, that’s the whole premise of the linked website (The Conversation): the latest research for a wide audience. Maybe you can subscribe to their science or astronomy sections.


This post seems to be upvoted for the "uncensored" keyword. But this should be attributed to https://huggingface.co/ehartford and others.

See also https://news.ycombinator.com/item?id=36977146

Or better: https://erichartford.com/uncensored-models


The latter link had a major thread at the time:

Uncensored Models - https://news.ycombinator.com/item?id=35946060 - May 2023 (379 comments)

We definitely want HN to credit the original sources and (even more so) researchers but I'm not sure what the best move here would be, or whether we need to change anything.


Thanks, didn't notice this discussion.


Ollama forks llama.cpp. The value-add is marginal. Still, I see no attribution on https://ollama.ai/.

Please, instead of downvoting, consider whether this seems fine from your point of view. No affiliation at all; I just don't like this kind of marketing.

See also https://news.ycombinator.com/item?id=36806448


It would be nice to add some attribution, but llama.cpp is MIT licensed, so what Ollama is doing is perfectly acceptable. Also, Ollama is open source (also MIT). You can bet any for-profit people using llama.cpp under the hood aren't going to mention it, and while I think we should hold open source projects to a slightly higher standard, this isn't really beyond the pale for me.

While you find the value-add to be "marginal", I wouldn't agree. In the linked comment you say "setting up llama.cpp locally is quite easy and well documented". OK, but it's still nowhere near as fast or easy to set up as Ollama. I know, I've done both.


Running make vs go build? I don't see much difference

I personally settled on the text-generation-webui


You seem to really disregard this author's position. They seem to have invested substantial effort in that specific area of research.

To validate the author's idea, you would need to train an LLM from scratch. If the author is right, you would get results similar to the current generation of LLMs, but with (a lot) less space required for the intermediate layers.

The cost of doing that is still measured in kilo- to mega-dollars, so why is it wrong to put the idea out in the open to be substantively criticized or adopted?


You don't need to train a ChatGPT-sized LLM; a toy nanoGPT would have been enough. You can train those on a consumer GPU in an afternoon.

And yes, I do disregard his research effort. There are hundreds of well-justified and well-researched "clever tricks" for improving Transformers, and almost all of them don't work. I'll believe it when I see the results.


Outliers only begin to appear around 3B parameters (as per the original LLM.int8() paper), so unfortunately it's not consumer-GPU-in-an-afternoon kind of stuff to prove you've managed to suppress them.


I tried to test this with nanoGPT in an afternoon, since the code change is pretty minimal. It's hard to get conclusive results at that scale though: to be able to say anything with confidence you'd need to run multiple tests, figure out whether the 'outliers' mentioned only appear above a certain scale, and find good tests of quantization performance that work on models small enough that you can iterate quickly. It's doable, but it's still a lot of work, enough that putting the idea out there and hoping others with more time and compute will try it seems a valid strategy to me :) More generally though, I definitely agree that the trend among 'improvements' to transformers has been things that don't turn out to work in practice.


Apparently Google has used it in flaxformer since 2021.


Do you know of handy testing steps? I suppose I could ask ChatGPT, but if someone has a validated "here, this is how you do it", I have a 3090 I can run it on. I'm just not keen to debug anything here.


Testing steps (based on thinking about this for 30 seconds, so they can probably be improved):

1. Train a Transformer-based model with and without the modified softmax (suggestions: GPT-2 or nanoGPT; a sketch of the change is below).

2. Measure performance. I'd probably start with perplexity and see if there is any difference (we'd expect little difference).

3. Quantize both models with different quantization strategies.

4. Measure the perplexity of the quantized models of different sizes. If this is working, we'd expect performance to drop off more quickly for the non-modified model than for the modified one.
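For step 1, and assuming the modification under discussion is the "add 1 to the softmax denominator" variant (that's my reading of the article; the exact proposal may differ), the code change is a tiny drop-in replacement for the attention softmax, e.g. in PyTorch:

    import torch

    def softmax_one(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
        # "Quiet" softmax: exp(x_i) / (1 + sum_j exp(x_j)).
        # Equivalent to appending an extra, always-zero logit, so an attention
        # head is allowed to attend to "nothing" instead of being forced to
        # put its full weight somewhere.
        m = x.max(dim=dim, keepdim=True).values.clamp(min=0.0)  # shift for numerical stability
        e = torch.exp(x - m)
        return e / (torch.exp(-m) + e.sum(dim=dim, keepdim=True))

In nanoGPT you would swap this in for the F.softmax call in the attention block (the manual, non-flash path), train once with and once without it, then quantize both checkpoints and compare perplexity as in steps 2-4.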


I was thinking about a different problem as I was typing that and got some mental memory alias bug. I wanted to know a set of steps to take to train a model. My apologies.

In any case, that was an lmgtfy-level question. Here's what I found: https://til.simonwillison.net/llms/training-nanogpt-on-my-bl...

I shall try that soon.


Shaaaaameless plug:

I did a writeup like this (not as nicely as Simon, though) where I used modal.com (cloud GPUs, containers, quick starts, free $30/month of spend) to get access to their GPUs (e.g. T4, A100).

https://martincapodici.com/2023/07/15/no-local-gpu-no-proble...

The T4, I think, was good enough for the job; not much need for the A100.

Since that post, I have been working on an easy way to do this with a script called lob.py that requires no code changes to the nanoGPT repo (or whatever repo you are using) and runs on modal.com. The script exists but is still being refined as I use it. Once it is battle-tested a bit more, I will do a post.

(It is named lob.py because it "lobs the code over to the server"; lob is UK slang for throw.)

Watch this space.


Thank you. FWIW, I often find a write-up plus a script superior to a script alone, because I often want to modify things. E.g. I want to run GPU-only, and someone else's script on its own only gets me part of the way there, whereas an added textual description closes the gap. Therefore, much appreciated.


In the Qualcomm AI paper linked in this post, it turns out they use a similar testing approach:

- BERT 109M, testing perplexity

- OPT 125M, testing perplexity

- ViT 22M, testing ImageNet top-1.


This is rather incredible, both in its simplicity and the fact that I never read about it yet. Thanks for posting.


What does this add over llama.cpp? Is it just an "easier" way to set up llama.cpp locally?

If so, I don't really get it, because setting up llama.cpp locally is quite easy and well documented. And this appears to be a fork. Seems a bit fishy to me, when looking at the other "top" comments (with this one having no upvotes, but still #2 right now).

(llama.cpp's original intention is identical to yours: "The main goal of llama.cpp is to run the LLaMA model using 4-bit integer quantization on a MacBook"¹)

¹ https://github.com/ggerganov/llama.cpp#description


The llama.cpp project is absolutely amazing. Our goal was to build with/extend the project (vs try to be an alternative). Ollama was originally inspired by the "server" example: https://github.com/ggerganov/llama.cpp/tree/master/examples/...

This project builds on llama.cpp in a few ways:

1. Easy install! Precompiled for Mac (Windows and Linux coming soon)

2. Run 2+ models: loading and unloading models as users need them, including via a REST API. Lots to do here, but even small models are memory hogs and they take quite a while to load, so the hope is to provide basic "scheduling"

3. Packaging: content-addressable packaging that bundles GGML-based weights with prompts, parameters, licenses and other metadata (a toy sketch of the content-addressing idea is below). Later the goal is to bundle embeddings and other larger files that custom models (for specific use cases, a la PrivateGPT) would need to run.
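For readers unfamiliar with the term, "content-addressable" here means the same thing it does for Docker/OCI layers: a blob is identified by the hash of its bytes, and a manifest ties the blobs together. A minimal illustration of the concept in Python (not Ollama's actual on-disk format):

    import hashlib
    import json
    import pathlib

    STORE = pathlib.Path("blobs")

    def put_blob(data: bytes) -> str:
        # Store a blob under the digest of its contents; identical content
        # is deduplicated automatically, and a digest never changes meaning.
        digest = "sha256:" + hashlib.sha256(data).hexdigest()
        STORE.mkdir(exist_ok=True)
        (STORE / digest.replace(":", "-")).write_bytes(data)
        return digest

    def package_model(weights: bytes, template: str, params: dict) -> dict:
        # A "model package" is just a manifest pointing at content-addressed blobs.
        return {
            "weights": put_blob(weights),
            "template": put_blob(template.encode()),
            "params": put_blob(json.dumps(params).encode()),
        }

    manifest = package_model(b"...ggml weights...", "### User: {prompt}", {"temperature": 0.8})
    print(json.dumps(manifest, indent=2))

The nice property is Docker-style caching and dedup: two packages that share the same weights end up pointing at the same blob.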

edit: formatting

