Stanford's Alpaca project spent only about $500 to fine-tune LLaMA for human instruction-following, using 52K instructions generated by GPT-3 — and it would probably cost even less today. The massive cost reduction comes from using GPT to generate the instruction data instead of paying humans to write it. The actual GPU fine-tuning run is relatively cheap by comparison (Stanford reported under $100).
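For a sense of how the data-generation side works, here is a minimal Python sketch of self-instruct-style generation against the OpenAI API. The model name, prompt format, and parsing here are illustrative assumptions, not Stanford's exact pipeline (Alpaca used text-davinci-003 via the legacy Completions API and a more elaborate filtering step):

```python
# Minimal self-instruct-style data generation sketch, loosely modeled on the
# Alpaca recipe. Prompt format, model name, and parsing are assumptions for
# illustration, not the actual Stanford pipeline.
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A couple of human-written seed tasks to prompt the model with.
SEED_TASKS = [
    {"instruction": "Rewrite this sentence in the passive voice: 'The boy threw the ball.'",
     "output": "The ball was thrown by the boy."},
    {"instruction": "List three uses for a paperclip.",
     "output": "1. Holding papers together. 2. Resetting small devices. 3. An improvised hook."},
]

PROMPT_TEMPLATE = """You are generating training data for an instruction-following model.
Here are some example tasks:

{examples}

Write one new, diverse task in the same JSON format:
{{"instruction": "...", "output": "..."}}"""


def generate_example() -> dict:
    """Ask the model for one new instruction/output pair, few-shot style."""
    examples = "\n".join(json.dumps(t) for t in SEED_TASKS)
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in model; Alpaca used text-davinci-003
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(examples=examples)}],
        temperature=1.0,  # high temperature encourages task diversity
        max_tokens=256,
    )
    return json.loads(resp.choices[0].message.content)


if __name__ == "__main__":
    dataset = []
    for _ in range(10):  # Alpaca scaled a loop like this to 52K tasks
        try:
            dataset.append(generate_example())
        except json.JSONDecodeError:
            continue  # skip generations that aren't valid JSON
    with open("instructions.json", "w") as f:
        json.dump(dataset, f, indent=2)
```

At a few cents' worth of API tokens per generated example, scaling a loop like this to 52K examples lands in the hundreds of dollars — which is where the roughly $500 data cost comes from.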