Hacker News

I'm on the team at Sapling Intelligence, a deep-learning AI Writing Assistant. A lot of privacy- and security-conscious folks don't like the idea of a keylogger, so we have self-hosted/on-premise/cloud-premise options for businesses. We have a list of available offerings here: https://sapling.ai/comparison/onprem. Sapling deployments can also be configured for no data retention, at the cost of some model customization.

Cost-wise, it doesn't make sense for individuals to host a neural-network-based grammar checker, though some of the rule-based options may work. If we can maintain some sort of Moore's-law scaling, there's a future where these language models run on individual computers instead of in the cloud.
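The rule-based options mentioned above do already run entirely on-device. A minimal sketch of what one such rule looks like (repeated-word detection, purely illustrative; real rule-based checkers like LanguageTool ship thousands of hand-written rules of this kind):

```python
import re

# Flag accidentally doubled words like "the the" -- a classic
# rule-based grammar check that needs no network and no GPU.
def doubled_words(text: str) -> list[str]:
    words = re.findall(r"[A-Za-z']+", text.lower())
    return [w for prev, w in zip(words, words[1:]) if prev == w]

print(doubled_words("I think that that the the sentence is fine."))
# ['that', 'the']
```

A single regex and a list comprehension obviously don't replace a neural model, but they illustrate why the rule-based tier is trivially cheap to self-host.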




> Cost-wise, it doesn't make sense for individuals to host a neural-network-based grammar checker

Why?

We already do the same in the photography world with apps like Topaz Labs' DeNoise, Sharpen, and Gigapixel, as well as Video Enhance. Why would I care how many gigs of disk space, or even a GPU, an NN grammar checker might require if it literally makes back the money by improving the writing that influences my career? Hell, I can expense whatever is needed to run this if the payoff to my company is "the quality of work is better, and more secure".


Well, I think you'd first have to know the resource requirements, and it's plausible that so few people would be willing or able to run it themselves that it doesn't make much business sense to focus on that as an option.

I'm certainly curious to know.


I'm not an expert and would appreciate being corrected if I'm wrong, but I'm under the impression that using a neural network after it has been trained typically requires relatively little computation and data. It's training it that takes the big compute and requires lots of data.

I think many services like Grammarly would be perfectly possible to implement without sending your data off device. There are just massive incentives not to.


Not really; that's an assumption based on a misunderstanding of how NNs actually work.

While training (i.e. the part necessary in order to even have a product) is incredibly resource-intensive and will yield a model that's typically hundreds of megabytes, actually applying the resultant model to new data takes milliseconds at most for plain-text analysis.
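Back-of-envelope numbers make the asymmetry concrete. Everything below is an illustrative assumption (a ~100M-parameter model and the common FLOPs-per-parameter rules of thumb), not a measurement of any particular product:

```python
# Assumptions: ~100M float32 parameters; ~2 FLOPs/parameter/token for
# a forward pass (inference) and ~6 FLOPs/parameter/token for training.
PARAMS = 100_000_000
BYTES_PER_PARAM = 4  # float32 weights

model_size_mb = PARAMS * BYTES_PER_PARAM / 1e6
print(f"model on disk: ~{model_size_mb:.0f} MB")  # ~400 MB: "hundreds of megabytes"

tokens_in_sentence = 30                  # one sentence to grammar-check
inference_flops = 2 * PARAMS * tokens_in_sentence

training_tokens = 10_000_000_000         # assumed training-corpus size
training_flops = 6 * PARAMS * training_tokens

print(f"inference: {inference_flops:.1e} FLOPs")  # ~6e9: fractions of a second on consumer hardware
print(f"training:  {training_flops:.1e} FLOPs")   # ~6e18: datacenter territory
print(f"training / inference: {training_flops // inference_flops:.0e}x")
```

Under these assumptions checking one sentence is about a billion times cheaper than training the model once, which is why inference on a laptop is plausible even when training is not.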


fixed that for you:

A neural-network based grammar checker doesn't make sense.



