Hacker News | jzelinskie's comments

I've written a reference because I've had this conversation so many times, haha

https://jzelinskie.com/posts/p-what/


There's even more. In games, we also use the term 'Producer' for someone who manages production, i.e. they're mostly a project manager. The design team holds the product design role. Even here, producers can and will dabble in design discussions insofar as needed to meet timelines.

The nice thing is no one gets inflated with a manager title they think makes them the boss of every department. You get an engineering lead, a production lead, etc.


After reading about the recommendation system breakthrough[1], I'm more curious about just how much we're leaving on the table with classical algorithms. If you took the amount of money being funneled into quantum computing and spent it purely on funding classical algorithm research, would you be better off?

[1]: https://www.quantamagazine.org/teenager-finds-classical-alte...


Shhh, big tech needs a new dragon to chase if GenAI stops being shiny.

Non-cynic take: exploration-based endeavors may still end up producing useful developments.


This is an incredibly useful one-liner. Thank you for sharing!

I'm a big fan of jq, having written my own jq wrapper that supports multiple formats (github.com/jzelinskie/faq), but these days I find myself more quickly reaching for Python when I get any amount of complexity. Being able to use uv scripts in Python has considerably lowered the bar for me to use it for scripting.

Where are you drawing the line?


Hmm. I stick to jq for basically any JSON -> JSON transformation or summarization (field extraction, renaming, etc.). Perhaps I should switch to scripts more. uv is... such a game changer for Python, I don't think I've internalized it yet!

But as an example of about where I'd stop using jq/shell scripting and switch to an actual program... we have a service that has task queues. The number of queues for an endpoint is variable, but enumerable via `GET /queues` (I'm simplifying here of course), which returns e.g. `[0, 1, 2]`. There was a bug where certain tasks would get stuck in a non-terminal state, blocking one of those queues. So, I wanted a simple little snippet to find, for each queue, (1) which task is currently executing and (2) how many tasks are enqueued. It ended up vaguely looking like:

    for q in $(curl -s "$endpoint/queues" | jq -r '.[]'); do
        curl -s "$endpoint/queues/$q" \
        | jq --arg q "$q" '
            {
                "queue": $q,
                "executing": .currently_executing_tasks,
                "num_enqueued": (.enqueued_tasks | length)
            }'
    done | jq -s '.'

which ends up producing output like (assuming queue 0 was blocked)

    [
        {
            "queue": 0,
            "executing": [],
            "num_enqueued": 100
        },
        ...
    ]
I think this is roughly where I'd start to consider "hmm, maybe a proper script would do this better". I bet the equivalent Python is much easier to read and probably not much longer.
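
For comparison, here's a rough sketch of what that Python equivalent might look like, assuming the same hypothetical API (`GET /queues` returning e.g. `[0, 1, 2]`, and `GET /queues/<q>` returning task details); the fetch function is injected so the summarization logic stays testable:

```python
import json
from urllib.request import urlopen


def summarize_queues(fetch_json, endpoint="http://localhost:8080"):
    """Per-queue summary: which task is executing and how many are enqueued.

    `fetch_json` is passed in so it can be swapped for a stub in tests.
    """
    summary = []
    for q in fetch_json(f"{endpoint}/queues"):
        detail = fetch_json(f"{endpoint}/queues/{q}")
        summary.append({
            "queue": q,
            "executing": detail["currently_executing_tasks"],
            "num_enqueued": len(detail["enqueued_tasks"]),
        })
    return summary


def http_fetch(url):
    # Stdlib-only HTTP GET returning parsed JSON.
    with urlopen(url) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(json.dumps(summarize_queues(http_fetch), indent=4))
```

Roughly the same line count as the shell version, but each step is a named field access rather than a jq program embedded in a string.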

Although, I think this example demonstrates how I typically use jq, which is like a little multitool. I don't usually write really complicated jq.


> wrapper that supports multiple formats

Is there a way to preserve key ordering, particularly for yaml output? And to customize the color output? Or, how feasible is it to add that?


I could Google it, but tell me a bit more about uv scripts. Isn't uv a package manager, like pip?


uv has a feature where you can put a magic comment at the top of a script and it will pull all the dependencies into its central store when you do “uv run …”. And then it makes a special venv too I think? That part’s cloudier.

https://docs.astral.sh/uv/guides/scripts/

Makes it a snap to have a one file python script without having to explicitly pip install requests or whatever into a venv.


Example usage for those who haven't seen it yet:

  #!/usr/bin/env -S uv run --script
  #
  # /// script
  # requires-python = ">=3.12"
  # dependencies = ["httpx"]
  # ///
  
  import httpx
  
  print(httpx.get("https://example.com"))


> dependencies = ["httpx"]

I heavily recommend writing a known working version in there, i.e. `"httpx~=0.27.2"`, which, in the best case, allows fixes and security patches (e.g. when httpx 0.27.3 releases), and, in the worst case, lets you change the `~=` to `==` in case httpx manages to break backwards compatibility with a patch release.

And, of course, always use `if __name__ == "__main__":`, so that you can e.g. run an import check and doctests and stuff in it.


Import checks and doctests you can run before any script code anyway, and exit() if needed. The advantage of `if __name__ == "__main__":` is that you can import the script as a module in other code, since in that case `__name__` will not be `"__main__"`.
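
A minimal sketch of that pattern (the function and its doctest are made up for illustration):

```python
def slugify(title: str) -> str:
    """Lowercase and hyphenate a title.

    >>> slugify("Hello World")
    'hello-world'
    """
    return "-".join(title.lower().split())


if __name__ == "__main__":
    # Runs only when executed directly, not when imported as a module.
    import doctest
    doctest.testmod()
    print(slugify("Some Blog Post"))
```

`import this_script` from other code (or a test runner) gets `slugify` without triggering the print at the bottom.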


That is amazing! I might use this instead of bash for some scripts.

I could imagine a Python wrapper script that parses the Python for import statements, then prepends these comments and passes it off to uv. Basically automating the process.


May I also add, this ain't a mere one-liner. It's a masterclass!


Just wanted to say thanks for such a good write-up and the great work on Otter over the years. We've used Ristretto since the beginning of building SpiceDB and have been watching a lot of the progress in this space over time. We've carved out an interface for our cache usage a while back so that we could experiment with Theine, but it just hasn't been a priority. Some of these new features are exciting enough that I could justify an evaluation for Otter v2.

Another major benefit of on-heap caches that wasn't mentioned is their portability: for us that matters because they can compile to WebAssembly.


I actually modified SpiceDB to inject a groupcache and Redis cache implementation. My PoC was trying to build a leopard index that could materialize tuples into Redis and then serve them via the dispatch API. I found it easier to just use the aforementioned cache interface and have it delegate to Redis.


Another good comparison would be against https://pkg.go.dev/github.com/puzpuzpuz/xsync/v3#Map


definitely, I can expand my comparisons and benchmarks


Honestly, if you're better than the rest, I would suggest collaborating with the existing solutions.


I recommend folks check out the linked paper -- it's discussing more than just confidentiality tests as a benchmark for being ready for B2B AI usage.

But when it comes to confidentiality, having fine-grained authorization securing your RAG layer is the only valid solution that I've seen in use in industry. Injecting data into the context window and relying on prompting will never be secure.


Is that sufficient? I'm not very adept at modern AI, but it feels to me like the only reliable solution is to not have the data in the model at all. Is that what you're saying this accomplishes?


Yes. It's basically a "treat the model as another frontend" approach - that way the model has the same scopes as any frontend app would.
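
An illustrative sketch of that approach: retrieved documents are permission-checked per user before they ever enter the context window, so the model can only see what the user could already see. The names here are hypothetical; `check_permission` stands in for whatever authorization service you use (e.g. a SpiceDB permission check):

```python
def build_context(user_id: str, candidates: list[dict], check_permission) -> str:
    """Assemble a RAG context from only the documents this user may view."""
    allowed = [
        doc for doc in candidates
        if check_permission(user=user_id, action="view", resource=doc["id"])
    ]
    # Documents the user lacks access to never reach the prompt,
    # so there is no confidential data for the model to leak.
    return "\n---\n".join(doc["text"] for doc in allowed)
```

The key property is that filtering happens at retrieval time, outside the model, rather than asking the model to withhold data it has already been shown.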


Why wouldn't the human mind have the same problem? Hell, it's ironic because one thing ML is pretty damn good at is to get humans to violate their prompting, and, frankly, basic rational thought:

https://www.ic3.gov/PSA/2024/PSA241203

Or, more concretely:

https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-ho...


Container runs OCI- (Docker-) compatible containers by creating lightweight VMs.

This repository houses the command-line interface which is powered by containerization[0], the Swift framework wrapping Virtualization.framework to implement an OCI runtime.

[0]: https://github.com/apple/containerization


I am going to show my ineptitude by admitting this: for the life of me, I couldn't get around to implementing the macOS-native way to run Linux VMs and used VMware Fusion instead. [0]

I'm glad this more accessible package is available vs. Docker Desktop on macOS or the aforementioned, likely-to-be-abandoned VMware non-enterprise license.

[0] https://developer.apple.com/documentation/virtualization/cre...


Lima makes this really straightforward and supports vz virtualization. I particularly like that you can run x86 containers through rosetta2 via those Linux VMs with nerdctl. If you want to implement it yourself of course you can, but I appreciate the work from this project so far and have used it for a couple of years.

https://lima-vm.io/


And you also have `colima`[0] on top of it that makes working with Docker on the command-line a seamless experience.

[0] https://github.com/abiosoft/colima


VMware Fusion is a perfectly good way of running VMs, and IMO has a better and more native UI than any other solution (Parallels, UTM, etc)


This is a weird take to me.

VMware Fusion very much feels like a cheap one-time port of VMware Workstation to macOS. On a modern macOS it stands out very clearly, with numerous elements reminiscent of the Aqua days: icon styles, the tabs-within-tabs structure, etc.

Fusion also has had some pretty horrific bugs related to guest networking causing indefinite hangs in the VM(s) at startup.

Parallels isn't always smooth sailing, but put it this way: I have had paid licenses for both (and VirtualBox installed) for many years to build Vagrant images, but when it comes to actually running a VM for purposes other than building an image, I almost exclusively turn to Parallels.


> reminiscent of the Aqua days

Maybe early Aqua. We're still in the Aqua days, if you don't count yesterday's Liquid Glass announcement. :)


Not on Apple Silicon it's not. In the Intel days, sure it was great.


I still can run the latest ARM Fedora Workstation on Apple Silicon with Fusion, and similar distros straight from the ISO without having to tweak stuff around or having problems with 3D acceleration, unlike UTM.


The screenshot in TFA pretty clearly shows docker-like workflows pulling images, showing tags and digests and running what looks to be the official Docker library version of Postgres.


Every container system is "docker-like". Some (like Podman) even have a drop-in replacement for the Docker CLI. Ultimately there are always subtle differences which make swapping between Docker <> Podman <> LXC or whatever else impossible without introducing messy bugs in your workflow, so you need to pick one and stick to it.


If you've not tried it recently, I suggest giving the latest version of Podman another shot. I'm currently using it over Docker, and a lot of the compatibility problems are gone. They've put massive effort into compatibility, including docker compose support.



Yeah, from a quick glance the options are 1:1 mapped, so an

  alias docker='container'
should work, at least for basic and common operations


The Brooklyn Public Library in Greenpoint has a tool library, although it isn't very large, if that's at all close to you. I'm not sure if it's available at any other library locations, but the one in Greenpoint is fairly new and has great programming.


Thanks for sharing this! I think we'll add a list of existing tool libraries so it will be easier to find the ones near you.


No commentary on your latter points about Oxide's compensation structure, but I fundamentally don't share the same sentiment you have about the dynamics of cash flow for venture-backed startups.

Maybe there are still VC-backed companies offering catered food, but I think they're by far the exception and not the rule. ZIRP is long over, and a decent portion of this generation of startups began during COVID and consequently don't even have an office. Maybe I'm the one that's in the bubble, but when you take VC money you're on the line to hit growth numbers in a way that you aren't when you bootstrap and can take your time to slowly grow once you've hit ramen profitability.

