cipherself's comments

Pygfx uses WebGPU while VisPy uses OpenGL.


This is indeed one of the major differences. Many of the problems plaguing VisPy are related to OpenGL; the use of wgpu solves many of them.

Also, wgpu forces you to prepare visualizations in pipeline objects, which require just a few calls at drawtime. In OpenGL there is far more work per visualized object at drawtime. That overhead is particularly costly in Python, so this advantage of wgpu matters even more there.
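To make that concrete, here is a toy sketch in plain Python (not the real wgpu or OpenGL API; all names are illustrative) of why baking state into pipeline objects up front keeps the per-object work at drawtime small:

```python
# Illustrative only: stands in for a wgpu-style render pipeline.
class Pipeline:
    """Prepared once: shader, blend state, vertex layout, bindings
    are validated and baked ahead of time, not at drawtime."""
    def __init__(self, shader, state, bindings):
        self.baked = (shader, state, bindings)

    def draw(self, encoder):
        # At drawtime each object costs only a couple of calls.
        encoder.append(("set_pipeline", self.baked))
        encoder.append(("draw",))

def draw_frame(pipelines):
    encoder = []  # stands in for a GPU command encoder
    for p in pipelines:
        p.draw(encoder)  # O(1) Python-level calls per object
    return encoder  # the whole frame is submitted in one go
```

In the OpenGL model the loop body would instead re-bind shaders, buffers, and uniforms for every object, and each of those is a Python-level call with real overhead.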


This is, albeit not very deep, very cool.


I'm curious, why would that be worth mentioning?


It tells me that the team behind this project has merit: they are experienced. It's not just some random people who had an idea; they probably saw a market for something.


I normally don't use appeal to authority as a form of evidence. The project/product should stand on its own merits, regardless of pedigree.


Absolutely, and that’s a valid point. But to me, seeing who’s behind a product or project is interesting and sometimes helpful.

For example (might be a bad one): the Astro.build project looked like a random project to me at first sight, but after digging into it I learned that it's by the same person (Fred Schott) behind Snowpack!

So then it turned into something very interesting. Why? Because now I knew that Astro was backed by people with real experience running a project, and years later my gut feeling was proven right.


> years later my gut feeling was right

And that's confirmation bias. What you haven't seen are the myriad projects backed by reputable people that failed. So next time, your gut feeling is going to mislead you.

It's why I always point it out when someone appeals to authority. It's annoying, I know, but I think a lot of people just don't think about their own biases.


That's a fair point, but I would not agree with your assumption that I haven't witnessed failed projects; I think you were too quick to call it confirmation bias.

I have been wrong on multiple occasions and ended up depending on an abandoned project. I understand the message you're trying to convey: fully relying on authority is the wrong approach. BUT I do think looking at someone's or something's past can give you a little insight, just a little.


It's both confirmation bias and appeal to authority; however, you can't fully remove those biases anyway. The best you can do is be aware of them and scrutinise projects despite their pedigree or your previous experience. (It's clear that you also understand this; I'm in no way disagreeing with you.)


How does one learn more about virtual power plants? Are there books you'd recommend? I am interested in learning both about the tech involved and the market dynamics.


I wrote a book on the topic based on the research I started in university!

Virtual Power: The future of energy flexibility.

My friends and family who guilt-tripped themselves into buying it to support me said it's surprisingly readable! Note: it is more market, policy, and energy technology focused, so not a deep dive into the networking and software.


Wow that's cool, ordered a copy as well. Thanks!


Very cool, ordered :)


Contact info is in the book if you have any questions!


Nice!


I really like the US Department of Energy 'state of affairs' document called The pathway to virtual power plants commercial liftoff (https://liftoff.energy.gov/vpp/).


This report was incredibly useful when writing this article.


Nice, thanks. I've downloaded the "Full Report" and will give it a read.


From the screenshots, it seems very much to be about tracing.


I am kinda confused by your comment. OP is about tracing as far as I understood, but you're referencing Google Cloud Monitoring (whereas the comparable thing would be Google Cloud Trace), and then again

  (esp. for metrics and their attributes).
but OP isn't about metrics at all, rather traces.


Hi, I'm one of the developers of Logfire. We do support metrics! Our vision is single-pane-of-glass observability where you can easily go back and forth between metrics and traces, both via correlation and by creating metrics/dashboards from traces.

We also support logging as a concept integrated into tracing (you can emit events without a duration that are just like log lines but carry the context of where they were emitted within a trace).


Google Cloud Monitoring (https://console.cloud.google.com/monitoring), perhaps not what you'd call it, supports the big three signals: logs, traces, and metrics.


One slightly related thing you can do is test the API with schemathesis[0].

[0] https://github.com/schemathesis/schemathesis


Generating HTML from Lisps has poisoned any other approach for me; see for example https://www.neilvandyke.org/racket/html-writing/, https://reagent-project.github.io/, and https://edicl.github.io/cl-who/


That's very nice; I wonder if we could write a sort of DSL for this in Python.



The problem with this is that it doesn't map as nicely to HTML (or, more generally, XML) as s-expressions do.


My https://pypi.org/project/xml-from-seq/ is a simple mapper from e.g.

    ['p', 'This is a ', ['a', {'href': 'https://example.com'}, 'link']]
to

    <p>This is a <a href="https://example.com">link</a></p>
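A minimal sketch of how such a mapper can work (illustrative only, not the actual xml-from-seq implementation; it skips self-closing tags and other details):

```python
from html import escape

def render(node):
    """Render a nested-list 'element' into an HTML string.

    A node is either a string (text) or a list of the form
    [tag, optional-attrs-dict, *children], mirroring the
    s-expression style shown above.
    """
    if isinstance(node, str):
        return escape(node)
    tag, *rest = node
    attrs = {}
    if rest and isinstance(rest[0], dict):
        attrs, rest = rest[0], rest[1:]
    attr_str = "".join(
        f' {k}="{escape(str(v))}"' for k, v in attrs.items()
    )
    children = "".join(render(child) for child in rest)
    return f"<{tag}{attr_str}>{children}</{tag}>"
```

So `render(['p', 'This is a ', ['a', {'href': 'https://example.com'}, 'link']])` yields the `<p>…</p>` string above.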


Interesting. A while back at $EMPLOYER, while working on a PoC, I found myself in a situation where I needed to take user-provided Postgres SQL queries and run them against one of our APIs. Roughly: I converted the API response to a pandas DataFrame, parsed and transformed the query from the Postgres dialect to the DuckDB dialect using sqlglot, used duckdb to query the DataFrame, converted the result to JSON, and returned it to the user.


Evidence does something similar, dumping data to Parquet:

https://docs.evidence.dev/core-concepts/data-sources

https://news.ycombinator.com/item?id=35645464

I wish the implied ETL step were even clearer from the homepage. It's not really feasible for us to dump entire tables to the dev machines for working with production data, but it is an interesting concept.


That's a really creative way to solve that problem! I would have spun up a temporary pg instance etc, but this is much nicer.


That's interesting. I'm looking to do something similar, but I need wire compatibility with PostgreSQL, so that any Postgres client can talk to our service. I didn't have a lot of luck finding a good "middleware".


I think this is probably the missing piece for you? https://github.com/jwills/buenavista


Thanks. This looks promising!


You could consider hosting an empty PostgreSQL database, compiling your code as a PostgreSQL foreign data wrapper, and exposing it as a view. Nothing is more compatible with the Postgres wire protocol than PostgreSQL itself ;)

Turbot compiles their Steampipe plugins this way. Example: https://github.com/turbot/steampipe-plugin-net


That's a good idea, and we considered FDWs (for this and more), but having a middleware makes it more flexible: FDWs have limitations around pushdown with subselects, and we would still be constrained to a single PostgreSQL instance for execution, when in theory we could parallelize (certain) queries across nodes.


Yes, also the text

  or download on Mac
is not hyperlinked.

