I'm neck-deep in this ChatGPT stuff right now and build 1-2 apps a week, so I'm a bit biased.
Your presumed target audience is someone who does not know their way around prompt-based LLMs.
For this person, neither the problem nor the solution space is defined clearly enough.
For example:
- Not enough pre-defined selectors, too much "define yourself".
- The meaning of the selectors that you give is opaque. (E.g., how does 'you will Detect' help the user?)
- Result: The impact of the choices on the output becomes unclear, the tool becomes a chicken-and-egg problem. (It says it helps you to understand the system, but you need to understand the system to use it effectively)
With the above, it's almost easier to ask ChatGPT to generate an effective prompt for you...
I assumed the target audience knows a bit about prompt-based LLMs and could use some guidance. If that's the case, I think this serves as an excellent and straightforward framework for leveling up their skills.
> I'm neck deep in this ChatGPT stuff right now and build 1-2 apps a week, so a bit biased.
As someone who has never worked on a project that wasn't at least a couple of months long, I'm curious about the kind of apps that can be built in half a week. Do you have links to share?
> As someone who has never worked on a project that wasn't at least a couple of months long, I'm curious about the kind of apps that can be built in half a week. Do you have links to share?
The last one was a prototype for an internal SEO improvement tool, which will (hopefully) be used by a marketing agency to manage client sites more effectively. Think: fixing alt attributes, links, meta tags, etc. "App" is too big a word for it, but it might turn into a Shopify/WordPress plugin someday. I also built a Telegram bot for my parents last week that helps with various day-to-day tasks (they're elderly and live in a foreign country).
Having said that, here are two crappy technology demonstrators I built in the last 4 days with tools I wasn't familiar with a week earlier (Flask + MongoDB):
(Please don't murder me, I know it's utter crap in the grand scheme of things — they're mostly there to demonstrate an approach to solve a specific problem.)
Why this has been an absolute rocket ship in terms of learning: I use ChatGPT to generate boilerplate code extremely quickly and to debug things in languages I'm not familiar with (e.g., "What does WSGI want from me again?").
The benefit, at least for me: you learn while doing a (more or less) useful hands-on project instead of answering disjointed multiple-choice questions for some artificial test.
And ChatGPT is my hyperindividualised pair programmer & slightly amnesic teacher.
YES!!! I've been experimenting with that on my local machine... As the Q&A repo grows larger, it's sometimes scarily good, sometimes utterly horrendous.