Granger causality is a very restrictive and incomplete view of causality. Pearl's counterfactual system with do-calculus is a more general way to think about causality. This SURD appears to be a souped-up version of Granger.
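To make the contrast concrete: Granger's notion is basically "does x's history, beyond y's own, improve prediction of y?" Here's a minimal numpy sketch of that idea (the data, coefficients, and single-lag setup are made up for illustration, and a real test would compute an F-statistic rather than just comparing fits):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x "Granger-causes" y through a one-step lag.
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def rss(target, predictors):
    """Residual sum of squares of a least-squares fit."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    resid = target - predictors @ beta
    return float(resid @ resid)

target = y[1:]
lag_y = y[:-1]
lag_x = x[:-1]

# Restricted model: y_t explained by its own lag only.
restricted = np.column_stack([np.ones(n - 1), lag_y])
# Full model: y_t explained by its own lag plus x's lag.
full = np.column_stack([np.ones(n - 1), lag_y, lag_x])

rss_r = rss(target, restricted)
rss_f = rss(target, full)

# If adding x's history shrinks the residuals a lot,
# x "Granger-causes" y in this narrow, predictive sense.
print(rss_f < rss_r)
```

Note how little this asks of the data: it is purely about predictive improvement from lagged observations, which is exactly why confounders and interventions (Pearl's territory) fall outside it.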
And the potential outcomes framework (Neyman-Rubin) is even more general :)
Either way, Holland's 'Statistics and Causal Inference' paper (1986) is a nice read on the different frameworks for causality, especially with regard to Granger (and friends) versus do-calculus/Neyman-Rubin.
I have not found an employer that values skills in multiple languages plus being quick at learning new ones. I assume these are the basic requirements for your type of work. Please share, if you can, how you got started.
At first we searched Twitter/LinkedIn for companies that seemed to get into trouble more than once: hacked, downtime, etc. We also looked for dead AdSense links of big companies (click the link -> they pay -> the result page is a 404, 500, or just dead); I found these for Pepsi, Nike, etc., and then sent them our proposal. When we started, the rate was €150/hr, and one of the first companies we emailed (this was in London) came back and asked if we do entire projects, as €150/hr was less than he paid the people he had at the time.
I will say that less is more and "The Bitter Lesson" applies here. Chasing biologically-inspired rabbits, such as STDP and LIF (see paper above/wikipedia), does seem to be a waste of time, especially when we have this thing entirely outside of biology that can arbitrarily replicate, serialize, mutate, and simulate billions of instances of the same parent candidate in minutes to hours.
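For readers who haven't seen one, a leaky integrate-and-fire (LIF) neuron really is only a few lines. This is a toy sketch with made-up parameters, not a claim about any particular paper's model:

```python
# Toy LIF neuron; all parameter values here are illustrative.
tau = 20.0       # membrane time constant (ms)
v_rest = 0.0     # resting potential
v_thresh = 1.0   # spike threshold
v_reset = 0.0    # reset potential after a spike
dt = 1.0         # integration step (ms)

def simulate_lif(input_current, steps=100):
    """Euler integration of dV/dt = (-(V - v_rest) + I) / tau."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Membrane "leaks" back toward v_rest while integrating input.
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:
            spikes.append(t)  # threshold crossing -> emit a spike
            v = v_reset
    return spikes

print(simulate_lif(1.5))  # supra-threshold drive -> regular spike train
print(simulate_lif(0.5))  # sub-threshold drive -> no spikes
```

The "leak" term is the part the parent comment calls a limitation rather than a feature: without sustained input, the membrane potential decays and accumulated evidence is simply lost.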
Leaky charge carriers and the inability to persist learned weights between candidate instantiations are limitations, not features to emulate. Imagine if you could be reincarnated with all of the exact knowledge you have today. Then clone that 1000 times and apply subtle mutations to it. Then, put these clones in an FFA arena and see who wins based upon a very well crafted fitness function. Then, do all of that over and over thousands of times per hour.
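That clone-mutate-select loop is only a few lines of Python. Here's a toy sketch; the fitness function, target vector, and mutation scale are all stand-ins (a real fitness function would score arena performance):

```python
import random

def fitness(genome):
    """Toy fitness: closeness to an (arbitrary) target vector."""
    target = [0.25, -0.5, 1.0, 0.0]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, sigma=0.05):
    """Clone a parent and apply subtle Gaussian mutations."""
    return [g + random.gauss(0, sigma) for g in genome]

random.seed(42)
parent = [0.0, 0.0, 0.0, 0.0]

for generation in range(200):
    # "Reincarnate" the parent 1000 times with subtle mutations...
    clones = [mutate(parent) for _ in range(1000)]
    # ...then keep whoever wins the arena (parent included, so
    # fitness never regresses between generations).
    parent = max(clones + [parent], key=fitness)

print(fitness(parent))  # climbs toward 0 as parent nears the target
```

The expensive part in practice is of course the fitness evaluation, not this scaffolding; that's where the "thousands of times per hour" claim lives or dies.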
>Chasing biologically-inspired rabbits, such as STDP and LIF (see paper above/wikipedia), does seem to be a waste of time
Unless physiological compatibility with a biological brain is considered an interesting endpoint? If you think about Neuralink, for example, wouldn’t it be interesting if our brains could directly engage the model? Not just a translation layer, but perceive it directly and natively through some kind of synaptic modem that converts analog exchanges of neurotransmitters to the synthetic network in the digital domain.
> Unless physiological compatibility with a biological brain is considered an interesting endpoint?
Excellent point. I am focused entirely on synthetic simulations without biological integration (for now).
I think that could be an interesting next step - determining a way to modify an SNN that was trained in a synthetic time domain so that it can process real-time signals. Training these things online is not really an option; you have to run a large number of offline simulations before you can find the right networks. Learning rules like STDP could theoretically address the online learning problem, but I haven't found any traction down that path yet.
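For anyone unfamiliar, the pairwise STDP rule itself is tiny; the amplitudes and time constants below are illustrative rather than from any specific paper:

```python
import math

# Illustrative STDP parameters (not tuned to any published model).
a_plus, a_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Pairwise STDP weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:
        # Pre fires before post -> strengthen the synapse (LTP),
        # more strongly the closer the spikes are in time.
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        # Post fires before pre -> weaken the synapse (LTD).
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

print(stdp_dw(0.0, 10.0))   # positive: pre preceded post
print(stdp_dw(10.0, 0.0))   # negative: post preceded pre
```

The rule is local and online by construction, which is exactly why it's appealing for the real-time problem; the hard part is getting it to optimize anything resembling a global fitness function.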
I had heard about spiking neural networks, but I didn't really think about them until your post here. Lately I've been kind of deep diving into biological neurons (just a lay perspective) and what we know about their mechanism of action at the individual cell level. I'd also just watched the 8-hour episode of Lex Fridman's podcast with the folks from Neuralink, so my brain was primed to make this connection, and holy moly, it's exciting to think about the possibilities.
Thanks for sharing your work, would love to see a post on here down the road when you've found some light in the dark forest.
> Lately I've been kind of deep diving into biological neurons (just a lay perspective) and what we know about their mechanism of action at the individual cell level
I've been doing the same for a long time; it's a really fun and interesting topic. I really like reading about each new discovery of something that is computationally important. People don't realize how complex a neuron is, and that scientists still don't have the full picture.
If you haven't looked into it, check out how astrocytes are involved in computation (e.g. visual processing), their internal calcium-wave signaling, and their control of neurons.
> Then clone that 1000 times and apply subtle mutations to it. Then, put these clones in an FFA arena and see who wins based upon a very well crafted fitness function. Then, do all of that over and over thousands of times per hour.
I spent many years on an ALife+ANN simulation that did this. For each new generation, I kept the top X% unchanged, the next Y% were changed a little, next Z% changed a lot, etc.
It was pretty fun and I wish I had the time+money to continue on a larger scale.
Mindstorms is an example of what did not work. I want to provide an example of what does: the BBC micro:bit. It has a visual programming interface that translates to Python or JavaScript.
I wouldn’t immediately assume malicious intent. Most likely it's a solo dev's hobby project and they didn't bother with a privacy policy. Nonetheless, skepticism is warranted.