Hacker News
noobcoder | April 4, 2023 | on: State-of-the-art open-source chatbot, Vicuna-13B, ...
Is it worth hosting this on an EC2 instance at roughly $1.50 per hour (on-demand) rather than calling the GPT-3.5 API for the same workload? What is the break-even number of queries (at ~2000 tokens per query) that would justify self-hosting a model like this?
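A rough back-of-the-envelope sketch of that break-even point, assuming an API rate of about $0.002 per 1K tokens (an assumed gpt-3.5-turbo price, not stated in the comment) against the ~$1.50/hour on-demand EC2 cost mentioned above:

    # Break-even sketch: self-hosted EC2 vs. pay-per-token API.
    # Assumption (not from the comment): API priced at ~$0.002 per 1K tokens.
    EC2_COST_PER_HOUR = 1.50        # on-demand rate from the comment
    API_COST_PER_1K_TOKENS = 0.002  # assumed GPT-3.5 rate
    TOKENS_PER_QUERY = 2000         # from the comment

    api_cost_per_query = (TOKENS_PER_QUERY / 1000) * API_COST_PER_1K_TOKENS
    breakeven_queries_per_hour = EC2_COST_PER_HOUR / api_cost_per_query

    print(f"API cost per query: ${api_cost_per_query:.4f}")
    print(f"Break-even: {breakeven_queries_per_hour:.0f} queries/hour "
          f"(~{breakeven_queries_per_hour * 24:.0f}/day)")

Under these assumed numbers the API works out to about $0.004 per query, so the EC2 box only pays for itself above roughly 375 queries per hour of sustained load; the actual figure shifts with whatever API pricing and instance rate apply.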