
This is great! I love seeing how rapidly these ideas have evolved over the past 6 months. I've been internally calling these systems "prompt machines". I'm a strong believer that chaining together language model prompts is core to extracting real, reproducible value from language models. I sometimes even wonder if systems like this are the path to AGI, and I spent a full month 'stuck' on that hypothesis in October.

Specific to prompt chaining: I've spent a lot of time ideating about where "prompts live" (are they best as API endpoints, as CLI programs, as machines with internal state, or treated as single 'assembly instructions'? where do "prompts" naturally live?) and eventually decided they are most analogous to functions (and to API endpoints, via the RPC concept).

A mental model I've developed (sharing in case it resonates with anyone else):

a "chain" is `a = 'text'; b = p1(a); c = p2(b)` where p1 and p2 are LLM prompts.

What comes next (in my opinion) is the other programming constructs: loops, conditionals, variables (memory), etc. (I think LangChain represents some of these concepts as its "areas": chains (function chaining), agents (loops), memory (variables).)
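
For example, ordinary control flow wrapped around prompt functions already gives you all three constructs (a sketch; the prompts and `call_llm` stub are hypothetical):

```
def call_llm(prompt: str) -> str:
    # placeholder for a real completion API call
    return f"[model output for: {prompt[:40]}...]"

def is_good_enough(answer: str) -> bool:
    verdict = call_llm(f"Does this fully answer the question? Reply yes/no:\n{answer}")
    return verdict.strip().lower().startswith("yes")

memory = {}  # "variables" construct
answer = call_llm("Draft an answer to: why is the sky blue?")
for step in range(5):            # "loop" construct (agent-like iteration)
    if is_good_enough(answer):   # "conditional" construct
        break
    memory[f"draft_{step}"] = answer
    answer = call_llm(f"Improve this draft:\n{answer}")
```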

To offer this code-style interface on top of LLMs, I made something similar to LangChain, but scoped it to just the bare functional interface and the concept of a "prompt function", leaving the power of the "execution flow" up to the language interpreter itself (in this case Python) so the user can build anything with it.

https://github.com/approximatelabs/lambdaprompt

I've had so much fun recently just playing with prompt chaining in general; it feels like the "new toy" in the AI space (orders of magnitude more fun than DALL-E or ChatGPT for me). I built sketch, which I posted the other day on HN, on top of lambdaprompt.

My favorites have been experiments that test the inherent behaviors of language models using iterated prompts. I spent some time looking for "fractal"-like behavior inside these functions, hoping that with the right starting point, an iterated prompt function would avoid fixed points. This has eluded me so far, so if anyone finds non-fixed points in LLMs, please let me know!
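
The experiment, roughly (a sketch; the rewrite prompt is just one arbitrary choice of iterated function):

```
def call_llm(prompt: str) -> str:
    # placeholder for a real completion API call
    return f"[model output for: {prompt[:40]}...]"

def f(text: str) -> str:
    return call_llm(f"Rewrite the following text:\n{text}")

seen = set()
state = "a starting seed sentence"
for _ in range(100):       # cap iterations; a real model may never repeat exactly
    if state in seen:      # an exact repeat means a fixed point (or cycle)
        break
    seen.add(state)
    state = f(state)
print("settled into:", state)
```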

I'm a believer that the "next revolution" in machine-written code and behavior from LLMs will come when someone can tame LLM prompting enough for models to write prompt chains themselves (whether on lambdaprompt, LangChain, or something else!)

All in all, I'm super hyped about LangChain: I love the space they're in and the rapid attention they're getting!




LambdaPrompt looks really cool. Love the Pythonic expression as good ol' functions, which makes the code read very naturally.

What applications were you envisioning when you built it?


In terms of applications, I've built things like sketch: https://github.com/approximatelabs/sketch

Raw prompt-structure ideas I've worked with:

- Iterate on a prompt with another "discriminator" prompt that determines whether the result is good/safe

- Write N trials of an answer, then use another prompt to select the best one

- When writing code (SQL or Pandas), generate the output, then use a parser (e.g. `ast` in Python) to validate that the code parses; if not, feed it back into a prompt for fixes (see the sketch after this list)

- Logical negation checks: ask about X and about ~X; if the two answers are opposites, the model is likely being consistent, but if both come back "affirmative" (as the models tend to bias towards), it's definitely hallucinating
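
Here's a sketch of that parser-feedback loop from the third item (the prompts and `call_llm` stub are placeholders; note `ast.parse` only checks syntax, not correctness):

```
import ast

def call_llm(prompt: str) -> str:
    # placeholder for a real completion API call
    return "df['total'] = df['a'] + df['b']"

def write_valid_code(task: str, max_retries: int = 3) -> str:
    code = call_llm(f"Write Python code to: {task}")
    for _ in range(max_retries):
        try:
            ast.parse(code)          # syntax check only, nothing is executed
            return code
        except SyntaxError as err:
            code = call_llm(
                f"This code fails to parse ({err.msg}, line {err.lineno}). "
                f"Fix it:\n{code}"
            )
    raise ValueError("no syntactically valid code after retries")

print(write_valid_code("add columns a and b into a new 'total' column"))
```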

Other 'product' ideas I've tried:

- A chat-style interface (I made a chatbot last year, similar to ChatGPT)

- A "google-this-for-me" style chain, that checks google, summarizes multiple results, then synthesizes a final result

Ideas I've been sitting on that I think would be fun to prototype:

- An iterative "large document" editor: storing global intent, instructions, an outline, and the raw text, where each iteration of the prompt works on these objects to build out a large document (see the sketch after this list)

- A "research this topic for me", similar to the above, but include the google searching, summarizing, and such

- A code-repository "AI agent" that takes `Issues` and `Pull requests` as input and writes and edits code for you; by adding feedback in GitHub, you'd get it to modify the branch and act as a developer (code via the GitHub interface, rather than an IDE)
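
A sketch of the first idea, the iterative document editor (all names and prompts here are hypothetical):

```
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    # placeholder for a real completion API call
    return f"[model output for: {prompt[:40]}...]"

@dataclass
class DocState:
    intent: str        # global goal, fixed across iterations
    instructions: str  # current editing directive
    outline: str = ""
    text: str = ""

def step(doc: DocState) -> DocState:
    if not doc.outline:   # first pass: produce the outline
        doc.outline = call_llm(f"Outline a document. Goal: {doc.intent}")
    else:                 # later passes: revise the text against the outline
        doc.text = call_llm(
            f"Goal: {doc.intent}\nInstructions: {doc.instructions}\n"
            f"Outline: {doc.outline}\nCurrent text:\n{doc.text}\n"
            "Rewrite the text to better follow the outline."
        )
    return doc

doc = DocState(intent="an intro to prompt chaining", instructions="expand the opening")
for _ in range(3):
    doc = step(doc)
```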


Thanks for sharing your ideas! With respect to the "research this topic for me" concept, I stumbled upon this repo:

https://github.com/daveshap/LiteratureReviewBot

It does something similar, but uses the arXiv dataset to search PDFs instead of the internet.


This resonates with me. I've been slowly working on a similar project; instead of Python functions, I'm doing API creation. I have a website that allows you to create templated APIs for each prompt.

The next step for me is the workflow-composition part. Instead of a functional model of Python functions, I'm going to try to compose workflows with AWS Step Functions, where each step calls one of the templated API endpoints.
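
For what it's worth, one shape that could take as an Amazon States Language definition, using Step Functions' API Gateway integration (a sketch only; the endpoint, paths, and state names are made up):

```
import json

# Two chained prompt endpoints, each invoked via Step Functions'
# API Gateway integration (arn:aws:states:::apigateway:invoke).
definition = {
    "StartAt": "SummarizePrompt",
    "States": {
        "SummarizePrompt": {
            "Type": "Task",
            "Resource": "arn:aws:states:::apigateway:invoke",
            "Parameters": {
                "ApiEndpoint": "abc123.execute-api.us-east-1.amazonaws.com",
                "Method": "POST",
                "Path": "/prompts/summarize",
                "RequestBody.$": "$",  # pass the workflow input through
            },
            "Next": "RefinePrompt",
        },
        "RefinePrompt": {
            "Type": "Task",
            "Resource": "arn:aws:states:::apigateway:invoke",
            "Parameters": {
                "ApiEndpoint": "abc123.execute-api.us-east-1.amazonaws.com",
                "Method": "POST",
                "Path": "/prompts/refine",
                "RequestBody.$": "$.ResponseBody",  # chain the previous output
            },
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))
```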

I'm excited to see your progress.



