Hacker News new | past | comments | ask | show | jobs | submit login

Systems research papers do not represent all research papers out there, not even in computer science.

In cryptography, certainly a paper with formal definitions and proofs can be much more valuable than a corresponding blog post. It's a field where formalism is desired, if not necessary. Otherwise you can't check other people's "proofs", or even know what model you're working in.

I think, since people haven't come up with better formalisms, the writing is sometimes quite obtuse, which gets mistaken for "academic writing" when really it's a best effort to formalize.




Requiring formalism does not preclude attaching an informal but intuitive description of the formal definition or proof. Unless the authors don't clearly understand what they are talking about, or they want to prevent others from understanding their concepts too easily, I don't see any reason for them not to attach an ELI5 in addition to the formalism.


Sure. But it's an ELI5 "in addition to formalism", not "in lieu of formalism". In theory conferences like STOC or FOCS, the first section of the paper often comprises such an overview.

Certainly some papers are better written than others. But sometimes a blog post cannot replace a paper, unless it also goes into the depth and detail that formalism requires. (Then it becomes a 30 page blog post, where most people don't read past the intro.)


The complaint about research papers is that almost all of them omit the ELI5 and provide only the formalism.

You can have both and weave them together into a digestible narrative. I see Physics textbooks sometimes written this way.


Papers are mostly read by other researchers, where the added background is actively bad because it obscures the real meat of the paper to the main audience.

If you just wanted a digestible intro then you would usually buy a textbook.

I think the argument that every research paper ought to be a mashup of a textbook and the actual research is a bit silly, from a "people should specialize at what they're good at" standpoint.

Put in another context, I also don’t want every recipe to reintroduce what it means to “fry” or “braise” or “marinate”. We have Google for that.


I've long wanted an informational slider for bits of text: something that lets you zoom in and out to the desired level of complexity. LLMs might be able to fill in some of those gaps. You could turn any paper into an introduction to the subject it's a part of.


This sounds like a good use case for local LLMs. Browser plugin, precooked prompts for different levels of detail, maybe a LoRA to give the model some idea of the expected output. I bet some of the 13B models could do a useful job on this even if they were imperfect.
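The "precooked prompts" part of this could be sketched in a few lines. Below is a minimal illustration, assuming a local Ollama server on its default port; the level names, prompt texts, and model name are all hypothetical choices, not anything from the thread.

```python
# Sketch of the "detail slider" idea: one precooked prompt per slider
# position, sent to a local model. The Ollama endpoint and the model
# name are assumptions -- any local LLM server would work the same way.
import json
import urllib.request

# Hypothetical slider positions and their precooked instructions.
LEVELS = {
    "eli5": "Explain this passage as you would to a curious twelve-year-old:",
    "undergrad": "Summarize this passage for an undergraduate new to the field:",
    "expert": "Restate this passage tersely for a domain expert:",
}

def build_prompt(level: str, passage: str) -> str:
    """Combine the precooked instruction for `level` with the selected text."""
    if level not in LEVELS:
        raise ValueError(f"unknown detail level: {level!r}")
    return f"{LEVELS[level]}\n\n{passage}"

def rewrite(level: str, passage: str, model: str = "llama2:13b") -> str:
    """Send the assembled prompt to a local Ollama server (assumed default port)."""
    body = json.dumps({
        "model": model,
        "prompt": build_prompt(level, passage),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A browser plugin would just call something like `rewrite("eli5", selected_text)` on the highlighted passage and swap the result in; the interesting UI work is the slider itself, not the model call.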


Look for "stretchtext"


I don't know that much about AI, but my experience in other areas has shown me that 'more grown up' literature that feels harder to parse when you're starting out later becomes the precise technical information you need as you get deeper into a subject. Like W3Schools when you start out in web dev vs MDN when your skills are more mature.





