
Our default deployment is cloud-first for both training and inference at the moment, but we have thought about letting users export a trained model: either the model parameters in a standardised format, a compiled predict function, or a Docker image that encapsulates a full inference service. If you could use that kind of export within your application, it would allow on-premise inference. This is something we could probably make available fairly quickly if your use case needs it.
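
For illustration only, here is a minimal sketch of what the "standardised format" route could look like, assuming a PyTorch model and ONNX as the interchange format (neither is confirmed above). The exported file can then be served on-premise with nothing but onnxruntime:

    import numpy as np
    import torch
    import onnxruntime as ort

    # Hypothetical trained model; stands in for whatever the platform produces.
    model = torch.nn.Linear(4, 1)
    model.eval()

    # Export the parameters and graph to ONNX, a standardised interchange format.
    dummy_input = torch.randn(1, 4)
    torch.onnx.export(model, dummy_input, "model.onnx",
                      input_names=["input"], output_names=["output"])

    # On-premise inference needs only the .onnx file and onnxruntime.
    sess = ort.InferenceSession("model.onnx")
    x = np.random.rand(1, 4).astype(np.float32)
    pred = sess.run(None, {sess.get_inputs()[0].name: x})[0]
    print(pred)

The Docker-image route would amount to wrapping that runtime call behind a small HTTP endpoint and shipping the image, so the customer never has to send data outside their network.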


