Lots of applied NLP tasks used to require paying annotators to compile a golden dataset, then training an efficient model on it.
Now, if cost is of little concern, you can use zero-shot prompting on a large, inefficient model. If cost is a concern, you can use GPT-4 to create your golden dataset far faster and cheaper than human annotation, and then train your more efficient model on it.
Some example NLP tasks could be classification, sentiment analysis, or extracting structured data from documents. But I'd be curious which areas of NLP __weren't__ disrupted by LLMs.
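For the sentiment case, here's a minimal sketch of that pipeline, assuming the OpenAI Python client (v1+) and scikit-learn; the prompt, label set, and tiny corpus are illustrative placeholders, not a production setup:

```python
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["positive", "negative"]

def gpt4_label(text: str) -> str:
    """The 'golden dataset' step: GPT-4 annotates one example."""
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text. "
                        f"Answer with exactly one of: {', '.join(LABELS)}."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

# 1. LLM-annotate an unlabeled corpus instead of paying human annotators.
unlabeled = [
    "great product, works exactly as advertised",
    "arrived broken, total waste of money",
]
dataset = [(text, gpt4_label(text)) for text in unlabeled]

# 2. Train the cheap, efficient model on the GPT-4-labeled data.
texts, labels = zip(*dataset)
small_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
small_model.fit(texts, labels)

print(small_model.predict(["totally love it"]))
```

In practice you'd label thousands of examples and spot-check a sample of GPT-4's annotations, but the cheap downstream classifier is what actually serves traffic.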
> But I’d be curious which areas of NLP __weren’t__ disrupted by LLMs
Essentially: build a potent generic model using human feedback, labels, and annotation (an LLM like GPT-4), then use it to generate golden datasets for other new models without a human in the loop. Very innovative indeed.