This is indeed one of the major differences. Many of the problems that are plaguing Vispy are related to OpenGL. The use of wgpu solves many of them.
Also, wgpu forces you to prepare visualizations as pipeline objects, which at draw time require just a few calls. In OpenGL there is far more work per object being visualized at draw time. That overhead is particularly bad in Python, so this advantage of wgpu counts double here.
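To make that concrete, here's a rough sketch in the spirit of wgpu-py (the structure follows the WebGPU spec, but exact wgpu-py signatures vary by version, so treat it as illustrative rather than copy-paste ready):

```python
import wgpu

def prepare_visual(device, shader_module, pipeline_layout):
    # Done once, ahead of time: validation and state compilation all
    # happen here, at pipeline-creation time.
    return device.create_render_pipeline(
        layout=pipeline_layout,
        vertex={"module": shader_module, "entry_point": "vs_main"},
        primitive={"topology": wgpu.PrimitiveTopology.triangle_list},
        fragment={
            "module": shader_module,
            "entry_point": "fs_main",
            "targets": [{"format": wgpu.TextureFormat.bgra8unorm}],
        },
    )

def draw_visual(render_pass, pipeline, bind_group, vertex_count):
    # Done every frame: just a few Python calls per object, versus the
    # long run of glBind*/glUniform*/glEnable* calls per object that
    # classic OpenGL needs at draw time.
    render_pass.set_pipeline(pipeline)
    render_pass.set_bind_group(0, bind_group)
    render_pass.draw(vertex_count)
```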
It tells me that the team behind this project has merit: they're experienced. It's not some random people who had an idea; they probably saw a market for something.
Absolutely, and that’s a valid point. But to me, seeing who’s behind a product or project is interesting and sometimes helpful.
For example (might be a bad one), the Astro.build project looked like a random project to me at first sight, but after digging into it I learned that it's the same people (Fred Schott) behind Snowpack!
That turned it into something very interesting. Why? Because now I knew that Astro was backed by people with real experience running a project, and years later my gut feeling turned out to be right.
And that's confirmation bias. What you haven't seen are the myriad projects backed by reputable people that failed. So next time, your gut feeling is going to mislead you.
It's why I point it out each time someone appeals to authority. It's annoying, I know, but I think a lot of people just don't think about their own biases.
That's a fair point, but I don't agree with your assumption that I haven't witnessed failed projects; I think you were too quick to call it confirmation bias.
I have been wrong on multiple occasions and ended up depending on an abandoned project. I understand the message you're trying to convey: fully relying on authority is the wrong approach. BUT I do think looking at someone's or something's past can give you a little insight, just a little.
It's both confirmation bias and appeal to authority; however, you can't fully remove those biases anyway. The best you can do is be aware of them and scrutinise projects despite their pedigree or your previous experience. (It's clear that you also understand this; I'm in no way disagreeing with you.)
How does one learn more about virtual power plants? Are there books you'd recommend? I am interested in learning both about the tech involved and the market dynamics.
I wrote a book on the topic based on the research I started in university!
Virtual Power: The future of energy flexibility.
My friends and family who guilt-tripped themselves into buying it to support me said it's surprisingly readable! Note: it is more market, policy, and energy technology focused, so not a deep dive into the networking and software.
I really like the US Department of Energy 'state of affairs' document called The pathway to virtual power plants commercial liftoff (https://liftoff.energy.gov/vpp/).
I'm kinda confused by your comment: OP is about tracing as far as I understood, but you're referencing Google Cloud Monitoring (whereas the comparable thing would be Google Cloud Trace), and then again
Hi, I'm one of the developers of Logfire. We do support metrics! Our vision is single-pane-of-glass observability, where you can easily go from metrics <-> traces, both via correlation and by creating metrics/dashboards from traces.
We also support logging as a concept integrated into tracing (you can emit events without a duration that behave just like a log line but carry the context of where they were emitted relative to a trace).
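For example (a minimal sketch using Logfire's Python API; the span, event, and metric names here are made up):

```python
import logfire

logfire.configure()  # assumes a Logfire project/token is already set up

request_counter = logfire.metric_counter("requests")  # a plain metric

with logfire.span("handle request"):  # a span: has a duration
    # An event: no duration, but it carries the context of the
    # enclosing span, so it shows up inside the trace.
    logfire.info("input validated")
    request_counter.add(1)
```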
Interesting. A while back at $EMPLOYER, while working on a PoC, I found myself in a situation where I needed to take user-provided Postgres SQL queries and run them against one of our APIs. Roughly, I converted the API response to a `pandas` DataFrame, parsed and transpiled the query from the Postgres dialect to the DuckDB dialect using `sqlglot`, used `duckdb` to query the DataFrame, converted the result to JSON, and returned it to the user.
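Roughly like this (the DataFrame and the query are made-up stand-ins for the real API response and user input):

```python
import duckdb
import pandas as pd
import sqlglot

# Stand-in for the API response.
df = pd.DataFrame({"id": [1, 2, 3], "amount": [9.5, 12.0, 3.25]})

# User-provided Postgres SQL, transpiled to the DuckDB dialect.
user_sql = "SELECT id, amount FROM df WHERE amount > 5 ORDER BY amount DESC"
duck_sql = sqlglot.transpile(user_sql, read="postgres", write="duckdb")[0]

# DuckDB's replacement scans let the query reference the in-scope
# pandas DataFrame by name ("df").
result = duckdb.sql(duck_sql).df()
print(result.to_json(orient="records"))
```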
I wish the implied ETL step were even clearer from the homepage - it's not really feasible for us to dump entire tables to the dev machines for working with production data - but it is an interesting concept.
That's interesting. I'm looking to do something similar, but need wire compatibility with PostgreSQL, so that any Postgres client can talk to our service. I didn't have a lot of luck finding a good "middleware".
You could consider hosting an empty PostgreSQL database, compiling your code as a PostgreSQL foreign data wrapper, and exposing it as a view. Nothing is more compatible with the Postgres wire protocol than PostgreSQL itself ;)
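If you'd rather not write the wrapper in C, Multicorn lets you prototype an FDW in Python. A minimal sketch (the class and column names are hypothetical):

```python
from multicorn import ForeignDataWrapper

class ApiFdw(ForeignDataWrapper):
    """Toy FDW that proxies rows from a backing API."""

    def __init__(self, options, columns):
        super().__init__(options, columns)
        self.columns = columns

    def execute(self, quals, columns):
        # A real implementation would call the backing service here,
        # ideally pushing `quals` down as filters; static rows are
        # yielded for illustration.
        for row in ({"id": 1, "amount": 9.5}, {"id": 2, "amount": 12.0}):
            yield {col: row.get(col) for col in columns}
```

Then register it with `CREATE EXTENSION multicorn` plus a `CREATE SERVER`/`CREATE FOREIGN TABLE`, and expose that through a view.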
That's a good idea, and we considered FDWs (for this and other things), but a middleware makes it more flexible: FDWs have limitations around pushdown with subselects, and we'd still be constrained to a single PostgreSQL instance for execution, when in theory we could parallelize (certain) queries across nodes.