
Genomics, protein structure prediction, various forms of small-molecule and large-molecule drug discovery.




None of the neural protein structure prediction papers I've read compare transformers to SAT solvers.

As if this approach [1] does not exist.

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC7197060/
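For readers who haven't looked at the paper: the idea is to encode lattice protein folding as Boolean constraints and hand the result to an off-the-shelf solver. Below is a minimal sketch of that flavor of encoding. It is my own toy, not the paper's actual encoding; the library (python-sat), grid size, and variable layout are all illustrative assumptions.

```python
# Toy sketch of lattice-folding-as-SAT (my own, NOT the paper's encoding).
# Place a length-N chain on a W x W grid so consecutive residues are
# lattice neighbors and no cell is used twice.
# Requires: pip install python-sat
from itertools import combinations
from pysat.solvers import Glucose3

N, W = 4, 3                          # chain length, grid width (illustrative)
cells = [(r, c) for r in range(W) for c in range(W)]

def var(i, cell):
    # One Boolean variable per (residue, cell) pair: "residue i sits on cell"
    return i * len(cells) + cells.index(cell) + 1

def adjacent(a, b):
    # 4-neighborhood on the square lattice
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

s = Glucose3()
for i in range(N):
    s.add_clause([var(i, c) for c in cells])          # at least one cell
    for a, b in combinations(cells, 2):               # at most one cell
        s.add_clause([-var(i, a), -var(i, b)])
for a in cells:                                       # self-avoidance
    for i, j in combinations(range(N), 2):
        s.add_clause([-var(i, a), -var(j, a)])
for i in range(N - 1):                                # chain connectivity
    for a in cells:
        s.add_clause([-var(i, a)] +
                     [var(i + 1, b) for b in cells if adjacent(a, b)])

if s.solve():
    model = {lit for lit in s.get_model() if lit > 0}
    fold = [next(c for c in cells if var(i, c) in model) for i in range(N)]
    print(fold)   # e.g. [(0, 0), (0, 1), (1, 1), (1, 0)]
```

The real encodings additionally score H-H contacts and optimize over repeated solver calls; this toy only finds a self-avoiding fold, but it shows the basic shape of the approach.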


What exactly are you suggesting: that the SAT solver example given in the paper (applied to the HP model of protein structure), or an improvement on it, could produce protein structure prediction at the level of AlphaFold?

This seems extremely, extremely unlikely, for many reasons. The HP model is a simplification of true protein folding/structure adoption, while AlphaFold (and the open-source equivalents) works with real proteins. The SAT approach uses little to no prior knowledge about protein structures, unlike AlphaFold (which has essentially memorized and generalized the PDB). Expressing all the necessary details would likely exceed the capabilities of the best SAT solvers.
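To put rough numbers on that last claim: even a naive (residue, cell) lattice encoding like the toy above blows up quickly with chain length. This is my own back-of-envelope arithmetic, not a figure from any paper:

```python
# Rough variable/clause counts for a naive (residue, cell) lattice encoding:
# pairwise at-most-one per residue, plus pairwise self-avoidance per cell.
def naive_encoding_size(n, w):
    cells = w * w
    variables = n * cells
    amo = n * cells * (cells - 1) // 2       # at-most-one, pairwise
    avoid = cells * n * (n - 1) // 2         # no two residues share a cell
    return variables, amo + avoid

for n in (4, 100, 300):                      # toy vs. typical protein lengths
    v, c = naive_encoding_size(n, w=n)       # grid wide enough for any fold
    print(f"n={n}: ~{v:.1e} vars, ~{c:.1e} clauses")
# n=300 already gives on the order of 1e12 clauses, before any contact
# counting. Real encodings are much smarter, but the blow-up is the point.
```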

(Don't get me wrong: SAT and other constraint approaches are powerful tools. I just don't think they're the best approach for protein structure prediction.)


(Not the OP) If we've learned one thing from the ascent of neural nets, it's that you have no idea whether something works until you've tried it. And by "tried it" I mean really, really given it a go, as hard as possible, with all the resources you can muster. The industry has thrown everything it's got at Transformers, but there are many other approaches with empirical results at least as promising and much better theoretical support that have not been pursued with the same fervor, so we have no idea how well or badly they'd do against neural nets if they were given the same treatment.

Like the OP says, it's as if such approaches don't even exist.


Do you understand the relevant fundamental difference between SAT and neural net approaches? One is a machine learning approach; the other is not. We know the computational complexity of SAT solvers; they're fixed algorithms. SAT doesn't learn from more data. It has performance limits, and that's the end of the story. BTW, as I mentioned in my other comment, people have been trying SAT solvers in the CASP competition for decades. They got blown away by the transformer approach.

Such approaches exist, and they've been found wanting, and no amount of compute is going to improve their performance limits, because they aren't ML approaches with scaling laws.

This is definitely not some unfair conspiracy against SAT, and probably not against the majority of pre-transformer approaches. I'm sympathetic to the concern that transformer-based research is getting too much attention at the expense of other approaches. However, I'd think the success of transformers makes it more likely than ever that a proven, promising alternative approach would get funding, as investors try to beat everyone to the next big thing. See quantum computing funding, or funding for way-out-there ASIC startups.

TL;DR: I don't know what the "same treatment" for SAT solvers would mean. Funding is finite and goes toward promising approaches. If there are "at least as promising" approaches, go show clear evidence of that to a VC and I promise you'll get funding.


Hey man, CASP is an open playing field. If it were better, they would've shown it by now.

Said somebody about neural nets in the 1980s.

I don't really understand what point you or the parent are trying to make. SAT approaches have been used in CASP, an open competition for protein structure prediction. People have been trying SAT there for decades. The transformer-based models blew every other approach out of the water, to the point of approaching experimental resolution.

Why am I supposed to pretend SAT is being treated unfairly, or whatever you two are expounding? Based on your response and the parent's, I don't think you'd be happy even if SAT approaches WERE cited.

Maybe you and the parent think no preexisting approach has been proven inferior to the transformer approach until an equivalent amount of compute has been thrown at it? That's the best I can come up with. But there are no 'scaling' gains waiting to be found with SAT solvers given more compute; it's not an ML approach. That is, it doesn't learn from more data. If you mean something more specific, I'd be interested to know.



