
LSM tree storage engine vs time series storage engine, similar philosophy but different use cases


Maybe I misunderstood both products, but I think neither Quickwit nor Turbopuffer is either of those things intrinsically (though log-structured messages are a good fit for Quickwit). I think Quickwit is essentially Lucene/Elasticsearch (i.e. sparse queries or BM25) and Turbopuffer does vector search (or dense queries) like, say, Faiss/Pinecone/Qdrant/Vectorize, both over object storage.


It's true that turbopuffer does vector search, though it also does BM25.

The biggest difference at a low level is that turbopuffer records have unique primary keys, and can be updated, like in a normal database. Old records that were overwritten won't be returned in searches. The LSM tree storage engine is used to achieve this. The LSM tree also enables maintenance of global indexes that can be used for efficient retrieval without any time-based filter.

Quickwit records are immutable. You can't overwrite a record (well, you can, but overwritten records will also be returned in searches). The data files it produces are organized into a time series, and if you don't pass a time-based filter it has to look at every file.
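
To make the difference concrete, here's a minimal Python sketch (an illustration of the two models, not either engine's actual code): an LSM-style read walks segments newest-first and stops at the first hit, so an overwrite shadows older versions, while an append-only store keeps returning every version ever written.

    # Illustration only, not turbopuffer's or Quickwit's real implementation.
    def lsm_get(segments, key):
        # segments are ordered newest-first, like sorted runs in an LSM tree
        for segment in segments:
            if key in segment:
                return segment[key]  # the newest write shadows older versions
        return None

    def append_only_get(segments, key):
        # immutable store: every version that was ever written stays visible
        return [seg[key] for seg in segments if key in seg]

    segments = [{"doc1": "v2"}, {"doc1": "v1", "doc2": "x"}]  # newest first
    print(lsm_get(segments, "doc1"))          # "v2"  (only the latest)
    print(append_only_get(segments, "doc1"))  # ["v2", "v1"]  (both)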


Ah I didn’t catch that Quickwit had immutable records. That explains the focus on log usage. Thanks!


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6612475/ is a review of Britton's research discussed in the article. It presents several points of evidence with a coherent argument for why meditation brings benefits while an excessive level of meditation may cause adverse effects.


It's interesting that anyone even had to specify that excessive meditation could cause harm. Isn't the whole point of Buddhism to follow the "middle way"?


The problem with "excessive X causes harm" is that it is tautologically true. The real question is the quantitative level where it starts doing more harm than good. Nobody knows, but it's easy to say after the fact that something is "excessive".


Yea, but I think 10 days in a row of 12 hours a day meditation might fall on the "excessive" side, right? I mean, I live in a place where Buddhists are pretty common, and none that I know do anything like that. They might go to a retreat in the mountains, but they aren't just sitting there for hours trying to feel their body parts mentally, they do other things, like copy religious texts by hand or take nature walks to appreciate life.


Yes, but even a traditional retreat in the mountains like you described could be "excessive" to some.


Proper brokerages will pay you to lend your securities, transparently disclose their programs, and require consent. For example:

- https://www.interactivebrokers.com/en/pricing/stock-yield-en...

- https://www.fidelity.com/trading/fully-paid-lending

Some brokerages lend out spare cash (and pay a transparent interest rate, and are subject to strict reserve requirements) but the majority of assets controlled by a typical brokerage are securities. FTX assets were all subject to their leverage strategies, with no specific reserve.


JPEG XL implements lossless image compression, but that's definitely not the most interesting feature. It also implements lossless JPEG recompression. So your existing JPEGs can be served with ~20% less bandwidth, without quality loss.

Unlike AVIF, JPEG XL also has advanced progressive delivery features, which is useful for the web. And if you look at the testing described in the post, JPEG XL also achieved higher subjective quality per compressed bit, despite having a faster encoder.


JPEG XL supports lossless, lossy and lossless JPEG recompression.

You can see lossless benchmarks against other formats here:

https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQp...


Lossless JPEG recompression, if it’s so good, can be done at the HTTP layer.

If a new image format doesn’t have a hardware decoder it’s dead. The security surface of new formats is unacceptable if it’s going to be slow and power-hungry too.

Only problem with JPEG is the lack of HDR.


JPEG XL as a HTTP Content Encoding:

1) transfer JPEG XL,

2) decode the JPEG XL to DCT coefficients,

3) encode a new JPEG1 file

4) decode the new JPEG1 file

5) render pixels

JPEG XL as image format:

1) transfer JPEG XL

2) decode the JPEG XL to DCT coefficients

3) render pixels

Two additional coding steps (3 and 4) are needed in the HTTP Content Encoding approach. If we want to transfer lossless JPEG1s, it is less computation and a faster approach to add JPEG XL as an image codec.

If JPEG XL is too powerful and creates danger for AVIF, then one possibility is to remove features such as adaptive quantization, lossless encoding and larger (non-8x8) DCTs. This would effectively reduce JPEG XL, as an image codec, to a JPEG1 recompressor.

Also, JPEG XL's reference implementation (libjxl) has a more accurate JPEG1 decoder than any other existing implementation. Asking someone else to paint the pixels leads to worse quality (about 8 % worse).


Production audio and video recorders generate or intake an SMPTE timecode signal, and stamp recordings with this timecode.

This timecode format is a timestamp with seconds resolution plus a frame count within each second. To properly sync, all the timecode generators must use the same framerate. In other words, the audio recorder’s timecode framerate needs to match the camera.
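
As a rough illustration (a minimal Python sketch assuming an integer frame rate and non-drop-frame counting), turning a running frame count into an hh:mm:ss:ff stamp looks like this:

    # Minimal sketch: non-drop-frame timecode at an integer frame rate.
    def frames_to_timecode(frame_count, fps):
        ff = frame_count % fps
        total_seconds = frame_count // fps
        ss = total_seconds % 60
        mm = (total_seconds // 60) % 60
        hh = total_seconds // 3600
        return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

    print(frames_to_timecode(24 * 60 * 60, 24))  # one hour at 24 fps -> 01:00:00:00

If the audio recorder and the camera disagree on fps, the ff field drifts relative to the picture, which is why the rates have to match.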


Yes, sound needs to be recorded with proper metadata, otherwise the sync process with the image is going to be pretty tedious. We could just record with a "dumb" audio recorder that doesn't write timecode and fps metadata and sync it up by hand to any camera FPS (23.976, 24, 25, 29.97, etc.), but that's just not practical for professional projects.

The funny thing with timecode, which is hh:mm:ss:ff, is that the frame count is done at 24 frames per second, even at 23.976. So 1 frame of 23.976 is longer in actual "real time" duration than 1 frame at 24 fps. This can get confusing when converting between 24 and 23.976.

There are more sophisticated workflows where the audio is recorded at 48.048 kHz (a 0.1% faster sample rate), called audio pull-up (or pull-down). The technique is used when shooting, for example, a TV spot with a film camera at 24 fps. Since the 24 fps picture will be played back at 23.976 in the edit, the audio will slow down by the same amount because it will itself play at 48.000 kHz instead of 48.048 kHz. I'm not sure that many productions still shoot TV spots on film, though, contrary to fiction, where film is still being used sometimes.
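
The arithmetic behind that 0.1% (a rough worked example, not a statement about any particular production's workflow) all comes from the NTSC 1001/1000 factor:

    # Everything is tied together by the NTSC 1001/1000 factor (~0.1%).
    print(24 * 1000 / 1001)     # ~23.976 fps playback of a 24 fps shoot
    print(30 * 1000 / 1001)     # ~29.970 fps
    print(48000 * 1001 / 1000)  # 48048.0 Hz "pulled up" audio sample rate
    # Material recorded at 48.048 kHz and played back at 48 kHz slows down by
    # the same 0.1% as picture slowed from 24 to 23.976 fps, so they stay in sync.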


But the timestamp is not recording actual seconds, right? Half the time it's recording intervals of .999 seconds. That is much weirder to handle than merely having the right framerate.


The time code is always incrementing by one frame at the given frame rate. For any of the NTSC-derived framerates, there are then two ways of incrementing. You can increment as if the frame rate is integral--so after 30 frames at 29.97 fps, your timestamp will show that 1 second has passed. The other option is a "drop-frame" timecode, where you skip over certain numbers when incrementing.

In all cases, the time code increments at the frame rate you are using.
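
For the drop-frame variant at 29.97 fps, a sketch of the usual textbook conversion (an illustration, not any particular vendor's code): frame labels 00 and 01 are skipped at the start of every minute except minutes divisible by 10, so only the numbering changes, no actual frames are dropped.

    # Drop-frame timecode at 29.97 fps: label numbers 00 and 01 are skipped at
    # the start of every minute except every 10th minute.
    def drop_frame_timecode(frame_number):
        fps = 30
        drop = 2                                     # labels dropped per minute
        frames_per_10min = 10 * 60 * fps - 9 * drop  # 17982
        frames_per_min = 60 * fps - drop             # 1798

        d, m = divmod(frame_number, frames_per_10min)
        if m > drop:
            frame_number += drop * 9 * d + drop * ((m - drop) // frames_per_min)
        else:
            frame_number += drop * 9 * d

        ff = frame_number % fps
        ss = (frame_number // fps) % 60
        mm = (frame_number // (fps * 60)) % 60
        hh = frame_number // (fps * 3600)
        return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"  # ';' marks drop-frame

    print(drop_frame_timecode(1800))    # 00:01:00;02 (labels 00 and 01 skipped)
    print(drop_frame_timecode(107892))  # 01:00:00;00 (back in step with the clock)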


Shortwave radio can be transmitted around the curve of the Earth by ionospheric reflection and refraction, so fewer repeaters are needed. This allows crossing vast oceans where microwave infrastructure might not be possible.

As you say the downside is available bandwidth and throughput.


Another good point, and one I thought about before replying, but that doesn't make microwaves slower, it makes them inapplicable.


In theory having fewer repeaters improves latency, probably in the range of 100ns per repeater. I don't know how much of a practical effect that has, likely very minimal with modern implementations.

Either way it's more sensible to build high throughput microwave networks given the tiny amount of shortwave bandwidth we have.


I bet it's way lower than 100ms for this application. Single digits. Maybe less.


I wrote 100ns, and meant an analog repeater. Full digital regeneration is probably in the 1-10μs range.
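
Back-of-the-envelope, with a purely illustrative hop count (the per-hop figures are the estimates above; the 30 hops is made up):

    analog_hop = 100e-9   # ~100 ns per analog repeater (estimate above)
    digital_hop = 5e-6    # middle of the 1-10 us digital regeneration range
    hops = 30             # illustrative hop count for a long route

    print(hops * analog_hop)   # ~3 microseconds of total repeater delay
    print(hops * digital_hop)  # ~150 microseconds

Either way it's small next to the propagation delay over intercontinental distances, which fits the "likely very minimal" point above.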


That’s why most markets close during night time.


No it isn’t.

Most markets, in terms of their daily volume, are open at night but very thinly traded until EU hours, though some do see action during Asia hours. It's just about liquidity.

Maybe you’re thinking of single name equity markets, which are a fraction of daily trading.


I think they're probably asking about US markets. Afaik those are only open during business hours, and I'm not sure how after-hours trading happens, but it might not be available to most people.

I think one of the main reasons is government concern about shenanigans happening overnight without oversight, or a flash crash. That being said, I presume most of the breakers that would halt trading activity are automatic, but potentially not all of them (I think the US did things like that during the housing collapse).


Video description:

“Chosen by fair dice roll. Guaranteed to be random.”


Worst case you just add more layers, like we do with sound absorbers now.

If some kind of moth-wing-inspired metamaterial can shrink the width of broadband sound-absorbing panels by 10x, that would still be quite useful.


Good points, hopefully it will lead to metamaterials that are at least better than current solutions.


Reaper and FabFilter getting on board would definitely help make this standard gain some momentum.

The list of “Following companies and projects are already evaluating CLAP for their host and plug-in software” is pretty impressive and personally accounts for a large amount of my workflow. Hopefully it gets big enough for Native Instruments to implement, Kontakt could benefit significantly from some of the new features.


I wonder if it can be integrated with iPlug2, since iPlug2 is based upon Cockos' WDL framework.

Given that the code for both lives in the same github account, I am hoping for 'Yes'


> I'd be bothered with my own stupidity if i did not cheat.

What? I'd be bothered with my own stupidity if I felt I couldn't get a good grade without cheating. I put in a minimal amount of time/effort in university where the subject matter didn't interest me, but always did fine without cheating.


Jumping through hoops to get a degree doesn't seem stupid enough to you already? Unfortunately, this is what our world is now, and you have to play this game if you want to be in the middle class. The least you can do is make the process less painful by cheating your way through.

