
Try finding PhD students and see if you can help with their programming or experiments. You'll learn more about how to do research in a few months alongside a PhD student than in years of struggling on your own. Once you have the "learning to learn" chops, you're free to study anything you want at incredible depth.

In the meantime, I have approached a few faculty members; unfortunately, there doesn't seem to be an opportunity like that. I'll continue looking.

It's amazing.

I've wanted a small prompt-manager chrome extension for a while.

Was procrastinating.

Was able to build one for myself with Firebase Studio in 30 mins.

Here's my PromptPal - built in just 30 minutes (disable your ad blocker to avoid issues - there's some interference for some reason):

https://9000-idx-studio-1744253706406.cluster-fkltigo73ncaix...

No frustration whatsoever.

Their prototyper is awesome.

And the code mode is also great.

I was able to push to GitHub as well with no problems. And the tool generates nice commits for every single change one makes.


Go's CSP model works well, if you first take the time to study it a bit. The only drawback with the docs is that they don't focus sufficiently on correctness and potential concurrency bugs. At the very least, the stdlib could mark whether particular functions are goroutine-safe or not. If you stay on guard for such things, you should be able to get a lot done with very few mistakes.
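
For instance - a plain map in Go is not safe for concurrent use, but nothing in the signatures tells you so. Here's a minimal sketch (a hypothetical page-hit counter - names and scenario are mine) of the CSP-style answer: confine the map to a single owner goroutine and have everyone else communicate over a channel instead of sharing memory:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        hits := make(chan string)
        result := make(chan map[string]int)

        // Owner goroutine: the only one that ever touches the map,
        // so no mutex is needed and no data race is possible.
        go func() {
            counts := make(map[string]int)
            for page := range hits {
                counts[page]++
            }
            result <- counts
        }()

        // Many concurrent writers - they never share the map, they
        // only send values over the channel ("share memory by
        // communicating").
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                hits <- "/home"
            }()
        }
        wg.Wait()
        close(hits)

        fmt.Println(<-result) // map[/home:100]
    }

Because only one goroutine ever reads or writes the map, there's nothing left to mark goroutine-safe - the channel is the synchronization.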

I have limited experience with Rust, but the ownership model and the borrow checker help with avoiding concurrency bugs as well. Personally, though, Rust slowed down the speed at which I could solve the problem at hand. If you have time on your hands, or you're very fluent with it, Rust may give better results.
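
To make the contrast concrete, here's a sketch (mine, not from either language's docs) of the kind of bug at stake: Go compiles this unsynchronized counter without complaint, and the race only surfaces at runtime (e.g. under go run -race), whereas Rust's ownership rules would reject the equivalent - two threads mutating one value with no lock - at compile time:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        counter := 0
        var wg sync.WaitGroup
        for i := 0; i < 1000; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                counter++ // unsynchronized write from many goroutines: a data race
            }()
        }
        wg.Wait()
        // Frequently prints less than 1000; `go run -race` flags the race.
        fmt.Println(counter)
    }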


LLMs are usually used to describe goals, or to provide feedback (correction or encouragement) towards their implementation.

Programming is about iteratively expressing a path towards satisfying said goals.

What LLMs are doing now is converting "requirements" into "formalizations".

I don't think Dijkstra is wrong in saying that performing programming in plain language is a pretty weird idea.

We want to concretize ideas in formalisms. But that's not what any human (including Dijkstra) starts with... you start with some sort of goal, some sort of need, and requirements.

LLMs merely reduce the time/effort required to go from goals -> formalism.

TLDR: Requirements != Implementation


If I may ask - how are humans in general different? Very few of us invent new ideas of significance - correct?


> If I may ask - how are humans in general different? Very few of us invent new ideas of significance - correct?

Firstly, "very few" still means "a large number", considering how many of us there are.

Compared to "zero" for LLMs, that's a pretty significant difference.

Secondly, humans have a much larger context window, and it is not clear how LLMs in their current incarnation can catch up.

Thirdly, maybe more of us invent new ideas of significance than the world will ever know about. How will you be able to tell if some plumber deep in West Africa comes up with a better way to seal pipes at the joints? From what I've seen of people, this sort of "do a trivial thing in a new way" happens all the time.


Not only is our "context window" larger, but we can add to it and remove from it on the fly, or rely on somebody else who, for that very specific problem, has a far better informed "context window" - which, BTW, they're adding to and removing from on the fly as well.


I think if we fully understood this (both what exactly human consciousness is and how LLMs differ - not just experimentally but theoretically), we would then be able to truly create human-like AI.


Design must flow from customer demand/desires.

And 90% of design is just "correctly assigning priority" to elements and actions.

If you know what is important (and what is less important) you use...

- white space (more white space = more important)

- dimension (larger = more important)

- contrast (higher = more distinct)

- color (brighter = more important)

... to practically implement the decided priority.

How do you validate that you've implemented the priority correctly?

Just ask a few people what they see first, second, third, etc. on a page.

If you designed it right - their eyes will see things exactly in the order you expected them to.

In short - "design is guiding the user's senses, in priority order, towards achieving their goals".

In our startup - we call this the "PNDCC" system (priority, negative space, dimension, contrast, color).

There are a few more tricks to make it even more powerful - but as I said - just getting these right puts you in the top 10%


In software - rarely do you know beforehand that you're building a chair.

Even when you're building in an existing category of products (say, databases), there are still lots of variations and trial and error needed to get "what we need" defined clearly.

I'd say "vibe coding" is just a new name for "exploratory programming" or good old "prototyping"


Software was telling computers what to do, in order to get done the things we wanted done. That is still what software is.

With AI, the new evolution in software is to skip the "telling it what to do" part and directly specify the goal - "telling it what to get done" - and let the system figure it out.


>With AI, the new evolution in software is to skip the "telling it what to do" part and directly specify the goal - "telling it what to get done" - and let the system figure it out.

Not exactly. Writing software has always been "telling computers what to do". Using AI to do this doesn't change much. In the very early days (after CPUs were invented), you told the computer what you wanted to do with CPU instructions directly, so you had to figure out how to manage memory and all other tedious things needed to get your program to work. Later, higher-level languages were developed to abstract the complexity of the hardware away, so you could tell the computer what you wanted in higher-level terms without so many low-level details. An AI tool just takes this to another level.


OK - how many practical declarative, goal-based languages have been successful in the past 50 years? The only one I can think of is SQL. Maybe some operations-research optimization tools as well. All of these were quite limited in the kinds of goals we could specify and the range of impacts they could have on the world.

Pure goal-oriented systems are only at their inception now. So it's a totally different game, with different mechanisms, and so on.


"Short cuts make long delays." --somewhere in LOTR

Thinking the user of AI can just spec a goal and then the AI automagically produces perfect results is just the pipe dream of the lazy who won't ever have a clue about how to verify the resulting system, much less the system that produced it. Good luck with that!

Software is about 100% correctness, which is damn-near impossible. And when it fails, it will fail at the worst time and in the worst way possible. It's just a matter of when, not if. We can't even remotely do that (for nontrivial systems), but we're gonna make a machine that can do it?!

Shhheeeeeiiiitttttt.


Don't have the time to make a long rebuttal, but my points are:

- For the past 50+ years, really intelligent people, including Turing Award winners, have believed in this dream.

- In fact, getting computers to solve problems on their own is the more intense path, IMHO. The lazy path is sitting content with the way things are.


Yeah, but heating an already-overheating world in pursuit of the pipe dream that a 95%-accurate black-box solution is good enough is pure madness.


No surprise that Galileo made enemies. While some of his thoughts on math are spot on, he really underestimated Sarsi throughout the work - who was himself a math professor.

On the nature of comets, it looks like Sarsi was nearer to the truth than Galileo.

Galileo was definitely a hostile/belligerent character.

Plus, a lot of his claims turned out to be pretty wrong, yet he stated them with such absolute confidence, which didn’t help his case at all.

In comparison, people like da Vinci and Newton were more politically/socially savvy. They knew how/when/where to communicate (relatively speaking).


off-topic: Love the username :)

