I'm curious how they deal with probabilities very close to 1 or 0. Usually when people do Bayesian computations they work in log-odds (logit) space, so that the precision of values close to 1 or 0 is effectively unbounded. That seems like a hard thing to do with an analog circuit.
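To make that concrete, here's a quick numerical sketch (plain Python, nothing Lyric-specific) of why log-odds representations keep precision where a linear probability representation has none:

```python
import math

def logit(p):
    """Map a probability in (0, 1) to log-odds space."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Map log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

# In linear space, a double can't tell 1 - 1e-20 apart from 1.0:
print(1 - 1e-20 == 1.0)        # True

# In log-odds space the same probability is a perfectly ordinary number.
# For p = 1 - eps, logit(p) is approximately -log(eps):
x = -math.log(1e-20)
print(x)                       # ≈ 46.05

# And round-tripping moderate values works fine:
print(inv_logit(logit(0.3)))   # ≈ 0.3
```

The same trick is why software decoders and Bayesian libraries work with log-probabilities or log-likelihood ratios internally.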
The founder's thesis mentions that they use a linearizer in their analog circuit, but all that does is give uniform precision over the entire logic-value range from 0 to 1 (that is, the same amount of voltage swing corresponds to the same amount of "logic value change" anywhere in the range).
I suppose they could use a "non-linearizer" to put more of the precision near 0 and 1, but that would come at the expense of precision in the middle: the less voltage swing involved, the more susceptible you are to noise from various sources.
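To illustrate that trade-off (a toy sketch; the logistic curve and the gain value are my assumptions, not anything from Lyric's actual design): with a companding "non-linearizer", the same voltage step, and hence the same amount of voltage noise, corresponds to a much smaller probability error near the rails than near 0.5.

```python
import math

def volts_to_prob(v, gain=4.0):
    """Hypothetical 'non-linearizer': a logistic map from a ±1 V swing
    onto (0, 1), compressing the region near the rails."""
    return 1 / (1 + math.exp(-gain * v))

# The same 0.1 V step carries very different probability resolution:
mid  = volts_to_prob(0.1) - volts_to_prob(0.0)   # near p = 0.5
rail = volts_to_prob(1.0) - volts_to_prob(0.9)   # near p = 1

print(mid)    # ≈ 0.099 — 0.1 V of noise moves p by almost 0.1
print(rail)   # ≈ 0.009 — the same noise barely moves p near the rail
```

So probabilities near 0 and 1 become robust to voltage noise, but mid-range probabilities become over ten times more sensitive to it.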
Are we likely to see more domain-specific chips in the future? Something like what http://www.deshawresearch.com/ has created: Anton, a custom chip optimised for molecular dynamics simulations.
I think it is likely we will see more of these. But this probability chip sounds more like an analog computer than what D. E. Shaw has done.
The Lyric web site says that they "model relationships between probabilities natively in the device physics", where D. E. Shaw's Anton chip sounds like it uses traditional logic gates the same way a GPU does.
P.S. Sorry, I downvoted you by accident -- I meant to upvote you.
Domain-specific chips are a cyclical trend. They come and go; at some times they have advantages, and at others they don't. (Remember Lisp machines? Good initially, but vastly outperformed by the end of their lifespan.) See, for example, the classic 'wheel of reincarnation' paper on graphics: http://cva.stanford.edu/classes/cs99s/papers/myer-sutherland...
The fundamental problem, as I see it, is that any domain-specific chip will receive only a tiny fraction of the R&D, economies of scale, and amortization that a general-purpose one will, so its advantage is only temporary. As long as Moore's law is operating, this will be true.
> In practice replacing digital computers with an alternative computing paradigm is a risky proposition. Alternative computing architectures, such as parallel digital computers have not tended to be commercially viable, because Moore's Law has consistently enabled conventional von Neumann architectures to render alternatives unnecessary. Besides Moore's Law, digital computing also benefits from mature tools and expertise for optimizing performance at all levels of the system: process technology, fundamental circuits, layout and algorithms. Many engineers are simultaneously working to improve every aspect of digital technology, while alternative technologies like analog computing do not have the same kind of industry juggernaut pushing them forward.
You're already seeing domain-specific chips, but in the form of an FPGA rather than an ASIC. If it can be implemented with traditional gates, an FPGA is the way to go for low-to-medium volume.
While Lyric may incorporate classic gates in their design, it also sounds like the heart of their technology uses something different from classic gates.
Sure, if you can represent your problem using probabilities :)
That said, I'm more excited about the use of Lyric's technology in ECC memory. I'm skimming Vigoda's thesis, and it seems that another very interesting application would be even lower-power mobile baseband chips.
That's a turbo decoder rather than a generic probability calculator, but it's doing probability calculations in the analog domain.
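Roughly, decoders like this operate on log-likelihood ratios, where fusing two independent soft estimates of the same bit is just addition; a toy sketch of the underlying math (the probabilities here are made up for illustration):

```python
import math

def logit(p):
    """Log-likelihood ratio: log of P(bit = 1) / P(bit = 0)."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert a log-likelihood ratio back to a probability."""
    return 1 / (1 + math.exp(-x))

# Two independent soft estimates that the same bit is 1
# (e.g. from the channel and from a decoder's extrinsic information):
p_a, p_b = 0.9, 0.8

# Fusing them in the probability domain:
p_prob = p_a * p_b / (p_a * p_b + (1 - p_a) * (1 - p_b))

# Fusing them in the LLR domain is just addition:
p_llr = sigmoid(logit(p_a) + logit(p_b))

print(p_prob, p_llr)   # both ≈ 0.973
```

An analog circuit that adds LLRs as voltages or currents is, in effect, doing this probability calculation "in the device physics".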
This sort of thing may make sense for error correction, but I don't think people will run general probability calculations on it. Too difficult to debug :-)
Though, I do wonder if they can simulate a neuron more efficiently than digital logic.
http://phm.cba.mit.edu/theses/03.07.vigoda.pdf
edit: p. 135 is where he starts discussing the implementation in silicon