Raspberry Shake is OK for seeing whether there was a quake, but there are several improvements I've wanted to make if I had the time.
Starting with a long-range wireless or cellular connection so it can sit away from human activity. I've also wanted to improve the timestamp accuracy to well beyond ±10 ms, and I'm pretty sure that can be done with a TCXO and a GPS receiver.
I also think a full Raspberry Pi isn't strictly needed, so the power consumption could be reduced a bit by sticking to microcontrollers.
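For the timestamping, the usual trick is to use the GPS receiver's PPS (pulse-per-second) output: latch a local counter on each PPS edge and timestamp samples as an offset from that edge, with the TCXO keeping the counter stable between pulses. A rough MicroPython-style sketch of the idea (the pin number, the NMEA parsing that would set the UTC second, and the sample hook are all placeholders, not a real implementation):

    # Rough sketch: GPS-PPS-disciplined timestamps on a microcontroller
    # (MicroPython-style; pin number and NMEA handling are placeholders).
    from machine import Pin
    import time

    pps_tick_us = 0     # local microsecond counter latched at the last PPS edge
    pps_utc_second = 0  # UTC second for that edge, set from the GPS NMEA stream

    def on_pps(pin):
        # Latch the local tick counter on each PPS rising edge.
        global pps_tick_us
        pps_tick_us = time.ticks_us()

    pps = Pin(4, Pin.IN)  # hypothetical PPS input pin
    pps.irq(trigger=Pin.IRQ_RISING, handler=on_pps)

    def timestamp_sample():
        # Sub-second offset since the last PPS edge (wrap-safe), counted by
        # the local oscillator, which is where a TCXO keeps the error tiny.
        offset_us = time.ticks_diff(time.ticks_us(), pps_tick_us)
        return pps_utc_second + offset_us / 1e6

Interrupt latency and TCXO drift over one second should both be in the microsecond range, so comfortably under the ±10 ms figure.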
> I would tend to trust an Arduino more than an ESP in critical infrastructure.
Arduino has always come across to me as being for hobbyists/beginners/learning; critical infrastructure would use something more like STMicro or Texas Instruments microcontrollers.
> That's what I really fear about LLMs, good content is going to get drowned out by endless bot drivel.
I anticipate the opposite.
AI content is already better than what crappy human writers produce.
In the medium term AI content is going to be better than most human writers (it arguably already is in some limited cases).
In the long term it may compete with the best human writers.
Some humans are too full of themselves, thinking they're so exceptional, while computers are proving time and again that's not so. The evidence is staring us in the face.
The AI-produced content I've seen reads like it was written by a competent 5th grader. National Geographic wasn't being written by competent 5th graders.
I do think people are going to get tired of AI-produced this and AI-produced that - they're going to crave content created by humans. At least for some things. There may be a place for AI-created content that we love. That is, this whole thing might be a false dilemma: it's not AI or human-created content, it's AI and human-created content. We'll find out which is better at what.
I imagine, after some decades living neck deep in a cesspool of AI generated content, there will be a renaissance of sorts where human produced content will suddenly break through the noise to resonate with some innately human trait that AI can’t figure out.
I imagine that in a not-so-distant future we'll have some "Genuine Human Generated" certification.
It has happened the same way in every area where you have rampant counterfeiting or simply cheaper competition.
Even if LLMs can generate great-quality content, it will always be cheaper still to mass-produce low quality. And once you cross the Rubicon of not being certified, it's just a matter of time before capitalism/greed makes it a race to the bottom.
AI content today is better than the worst human writers, but only because the worst human writers are so terrible. (Think "you trying to write an essay in a language you've only studied for three months" bad.)
That doesn’t mean that LLMs are on a course to inevitably surpass the 90th percentile human writer (which is what most full-time writers presumably are). I may be a Luddite here, but I don’t expect that in my young kids’ lifetime.
In my opinion, LLMs will never be able to replicate the human experience, as they're based solely on language, which is itself only a poor facsimile of consciousness.
If writing is our best attempt to share our personal psychic experience, then all an LLM can be is fragments of those experiences; it cannot itself experience the presence of being that is human.
(I'll note you wrote "AI content" not "LLMs", before anyone gets the wrong idea.)
> Some humans are too full of themselves, thinking they're so exceptional, while computers are proving time and again that's not so. The evidence is staring us in the face.
Thanks for daring to state this unpopular opinion. I completely agree that humans have a very poor (and hence inflated) idea of what they are good at, because they have only had animals and (recently) machines to compare themselves to, and opinions formed long ago are a firmly held cultural memory (e.g. literary and movie tropes, religion). Creativity is, IMO, one of the most badly misclaimed abilities.
Are you using an AI to write this? Or do the vast majority of actual humans rely on crappy writing? You do make a good point. I'll shove my next few Wikipedia edits down an AI's gullet...
*This was written with an AI, and I told it to be spicy, salty and smarmy.
>In the long term it may compete with the best human writers.
AIs lack a connection with the real world. This is important if we're talking about NatGeo.
>The evidence is staring us in the face.
AIs generate average text. That's admittedly cool, but I think it just shows how much human drivel is out there.
The funny thing is that people have been optimizing text for search engines in attempts to game ranking black boxes, so ML-based AI was driving bullshit for quite some time before LLMs.
The other possible scenario is that people who want to create new content will quickly figure out that the LLMs are basically rewriting their original research and extracting all the monetary worth from it, and so they'll stop publishing in any medium the bots can harvest.
So all new discoveries will get walled off from the GPTs, and we'll be stuck with constantly regurgitated old information unless we go looking in paid-for publications.
Absolutely! I chose FPN data as it's the best available to me in near real time, but as you say, it's not reliable.
This is partly why I've been collaborating with someone on a prediction feature that tries to estimate generation using real-time wind conditions and historic actual generation data at various wind speeds.
I'm hoping to improve things as time goes on and I learn more about the underlying data (and its quirks).
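Roughly the idea, as a toy sketch (the real feature presumably does more, and all the numbers and names here are made up for illustration): build a historic "power curve" of average actual generation at each wind-speed bucket, then interpolate it at the current wind speed.

    # Toy sketch: interpolate expected generation from historic actuals
    # bucketed by wind speed (all numbers below are made up).
    from bisect import bisect_left

    # (wind speed m/s, average observed generation MW) from past data
    power_curve = [
        (0.0, 0.0), (3.0, 10.0), (6.0, 120.0), (9.0, 480.0),
        (12.0, 900.0), (15.0, 1050.0), (25.0, 1050.0),
    ]

    def predict_generation(wind_speed_ms):
        """Linearly interpolate expected generation at a given wind speed."""
        speeds = [s for s, _ in power_curve]
        if wind_speed_ms <= speeds[0]:
            return power_curve[0][1]
        if wind_speed_ms >= speeds[-1]:
            return power_curve[-1][1]
        i = bisect_left(speeds, wind_speed_ms)
        (s0, p0), (s1, p1) = power_curve[i - 1], power_curve[i]
        return p0 + (wind_speed_ms - s0) / (s1 - s0) * (p1 - p0)

    print(predict_generation(10.5))  # ~690 MW with this toy curve

The real thing would obviously need per-site curves, cut-out behaviour above storm speeds, curtailment and so on, but that's the general shape of it.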
Yet still jam-packed with people.