Launch HN: Snark AI (YC S18) – Distributed Low-Cost GPUs for Deep Learning
122 points by davidbuniat on July 9, 2018 | hide | past | favorite | 66 comments
Hi HN,

We are Sergiy, Davit and Jason, founders of Snark AI (https://snark.ai). We provide low-cost GPUs for Deep Learning training and deployment on semi-decentralized servers.

We started Snark AI during our PhD programs at Princeton University. As deep learning researchers, we constantly ran into a lack of GPU resources. Renting GPUs on the cloud didn't fit our budget, and purchasing GPU cards was difficult -- at the time, many GPUs were being snapped up by crypto-miners. Then we found out that GPU mining profits lag far behind public cloud GPU prices.

On top of that, we figured out that there's a way to run neural network inference and crypto-mining simultaneously without hurting the mining hash rate. This observation is a little counterintuitive, but it turns out that anti-ASIC hashing algorithms are designed to be extremely memory-intensive, which leaves a good chunk of the CUDA cores idle. We can use that leftover compute to run neural network inference extremely cost-efficiently, which could be a lifesaver for large-scale inference tasks. http://snark.ai/blog
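The intuition here can be sanity-checked with a toy model. All the constants below (bandwidth, FLOPS, bytes and ops per hash) are assumed, round numbers for a GTX 1080 running an Ethash-style memory-hard algorithm, not Snark's measurements:

```python
# Back-of-the-envelope model of why memory-bound mining leaves CUDA cores idle.
MEM_BANDWIDTH_BPS = 320e9   # ~320 GB/s peak memory bandwidth (assumed)
PEAK_FLOPS = 8.9e12         # ~8.9 TFLOPS peak FP32 throughput (assumed)

BYTES_PER_HASH = 64 * 128   # Ethash: 64 random 128-byte DAG reads per hash
OPS_PER_HASH = 30_000       # rough guess at ALU work per hash (Keccak + mixes)

# Memory bandwidth, not ALU throughput, caps the hash rate.
hash_rate = MEM_BANDWIDTH_BPS / BYTES_PER_HASH   # ~39 MH/s upper bound
compute_used = hash_rate * OPS_PER_HASH          # FLOPS consumed by mining
idle_fraction = 1 - compute_used / PEAK_FLOPS

print(f"{hash_rate / 1e6:.0f} MH/s, {idle_fraction:.0%} of cores idle")
```

With these assumed figures, mining saturates the memory bus while using only a small fraction of peak compute, so most ALU throughput is left over for inference.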

At the same time, we provide low-cost raw hardware access for neural network training. We aim to be up to 10 times cheaper than on-demand instances on public cloud, undercutting preemptible/spot instances by up to 2x. When a GPU is idle, our algorithms efficiently switch it to mining to reduce costs. Try it out at https://lab.snark.ai, with 10 hours of free GPU time. We made it very simple to access the hardware through a single command line after `pip3 install snark`. More information on usage is at https://github.com/snarkai/snark-doc. We are also working on a hub for neural networks, similar to Docker Hub. It is still a work in progress, but you can take a look at a couple of examples at https://hub.snark.ai/explore.

We would love to get your feedback, and to hear what your experience was like training deep networks on our platform and then deploying them.




If you guys are searching for potential early customers, maybe you can hang around on the fast.ai course forum[0]. There are a lot of people there who are trying DL for the first time, like me, and are looking for the cheapest possible way to get started experimenting. One of the selling points you mentioned is the cheaper price, so maybe it would work out well.

http://forums.fast.ai/


Good suggestion! Our cheapest instances are actually perfect for learning ML/DL. We are also working on adding Jupyter support, so it will be even easier. What do you think?


Yup. Out-of-the-box support for Jupyter would be a great addition, provided the environment matches what the user needs (in the case of the fast.ai course, it should be similar to the course's required environment).

You could also try to reach out to Jeremy or Rachel, see if they can help with anything. They sometimes hang around here on HN too.


Please stop marketing fast.ai on hacker news


Why? It's really good. I was going to make the same suggestion, but to compete with Paperspace they should offer an image that comes with all the course dependencies preinstalled.


Thanks for the suggestion! Snark AI will offer a pod type with all fast.ai course dependencies installed and easy Jupyter notebook access.


Please elaborate.

AFAIK they promote their PyTorch wrapper in their course instead of using pure PyTorch. Anything else?


"3-5 times cheaper than public cloud"... That's very vague. What's the actual minimum price per hour?


Thanks for asking. A preemptible K80 instance on Google costs $0.135/h; an equivalent P106 on Snark costs $0.095/h.


That's definitely not 3-5 times cheaper.


I think snark isn't pre-emptible, they're just giving the best possible price on GCE (steelman argument, which is impressive), but it is a bit confusing.

From https://cloud.google.com/compute/pricing it looks like the non-preemptible K80 price is $0.45 USD per GPU per hour.

Can someone from Snark correct me if I'm wrong?
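For what it's worth, the ratios work out as follows using the prices quoted in this thread (both sets of numbers are mid-2018 figures taken from the comments above, not current prices):

```python
# Prices quoted in this thread, in $/h; the K80 and P106 are treated as
# roughly equivalent GPUs, as the parent comments do.
gce_k80_on_demand = 0.45
gce_k80_preemptible = 0.135
snark_p106 = 0.095

print(round(gce_k80_on_demand / snark_p106, 1))    # 4.7 (x cheaper vs on-demand)
print(round(gce_k80_preemptible / snark_p106, 1))  # 1.4 (x cheaper vs preemptible)
```

So the "3-5 times cheaper" claim only holds against on-demand instances; against preemptible ones the gap is closer to 1.4x.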


Yep, totally agree!


So what is the pricing? The website has a login-wall to access the pricing.


I'm sure there is a practical reason why companies do this, but it drives me crazy and makes me want to ignore what could potentially be a really cool idea. Please make the pricing transparent and viewable without a registration.


Yeah, you are right @Fede_V, sorry about this. Just made it public: https://lab.snark.ai/pricing


NVIDIA GTX 1080 - 0.25/hour
NVIDIA GTX 1070 - 0.2/hour
NVIDIA P106-100 - 0.095/hour

(All values in $ I assume)


I hope not, or it is an epic fail

0.25 x 24 x 30.5 = $183/month

For 100 Eur, or about $120, you can get a 1080 inside A DEDICATED SERVER (!!) at Hetzner: https://www.hetzner.com/dedicated-rootserver/ex51-ssd-gpu?co...

I guess I have a business idea then: charge the same $0.25/h to rent out the 1080, pocket the $63 difference per month and call it profit; or undercut Snark by 33% and still break even, without even doing any crypto mining or anything on the side.

Or, just call it step 1 toward mega profits! Step 2: resell the CPU computing power; step 3: resell the SSD storage space; step 4: resell the bandwidth; etc. (Not sure you can resell the unused RAM, but that's another "innovative business" waiting to happen!)
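The arithmetic above can be framed as a break-even utilization, using the prices quoted in this thread (the Hetzner figure is the approximate USD conversion from the comment):

```python
# At what utilization does an hourly GPU beat a flat monthly dedicated server?
hourly_rate = 0.25       # $/h, Snark GTX 1080 (quoted above)
monthly_flat = 120.0     # $/month, Hetzner dedicated 1080 server (approx. USD)
hours_per_month = 24 * 30.5

full_time_cost = hourly_rate * hours_per_month   # cost if you run 24/7
breakeven = monthly_flat / full_time_cost        # utilization break-even point

print(full_time_cost)        # 183.0
print(round(breakeven, 2))   # 0.66
```

Below roughly 66% utilization the hourly offer is cheaper; above it, the dedicated server wins, which matches the point about anyone with baseline 24/7 demand being better off at Hetzner.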

Compared to renting servers and properly configuring them (anycast, GeoIP, etc.), I often fail to see the value that "distributed" or "cloud" offers provide besides fast scalability.

It is nice to be able to put 4x more GPUs online in a few hours instead of a day, but I am not sure it commands a 33% premium except in very specific marginal cases.


> It is nice to be able to put 4x more GPUs online in a few hours instead of a day, but I am not sure it commands a 33% premium except in very specific marginal cases.

I suspect that, in general, it commands an even higher price premium, as irrational as that may seem (or actually be).

This particular business is based off the notion of fixed budgets (and, presumably, short time limits), which means that, no matter how much cheaper over all it is to rent resources for a full month, it's worth the premium to rent as much as you can for a result in 10 days.

This reasoning can apply to an early startup, too (in terms of time-to-market and unpredictable scalability), but it becomes actually irrational when it's not re-evaluated and a huge premium [1] is being paid for routine, easily predictable infrastructure sourced from cloud providers.

[1] hundreds of percent, i.e. multiples


Indeed, it seems to be a nice business niche: charge a large premium for on-demand services, hoping that clients are actively irrational in not re-evaluating their needs. Basically, you are betting they are too lazy to move over to a better baseline offer. Better: you can encourage this laziness with lock-in, like Amazon does!

That being said, cloud hosting is also funny in a different way: it reminds me of the late 90s, when you had to declare your hostname and use an FTP account to upload your files to your host.


Yeah, we present prices on an hourly basis, and you can spin up pods in a matter of seconds. We will actually try to reach out to them and see if a partnership can make our services even cheaper. Thanks for pointing this out.


You should try reaching out to them, but considering they have the expertise, own their infrastructure, and are also rolling out their own cloud offering, I am not really sure what their benefit would be in cooperating with you to alter their base offering.

I do not mean that in a bad way, just in a logical way.

I mean, they are already making a profit at $120/month, so I guess their response to you will be "sure, buy as many servers as you want, the price is $120 each". You will be back to square one, trying to sell your hourly services to scientists who need GPUs.

Then the initial problem remains: anyone with a baseline demand for GPUs is better off renting them at Hetzner. They can use you for small loads or unanticipated needs, but then it will be for a short time, before they opt for a monthly rental.

Even then, for this peak demand, you compete head-on against Google Cloud and AWS. You certainly undercut their ridiculous prices, but it is not clear to me how much better off I am choosing you over Hetzner plus any other cloud offering.

I am just talking as a prospective client (I often need GPUs!) who fails to see what's unique or interesting in your offer. And if you know less about your competition than your prospective clients do, I see that as a bad sign: your offer may not be priced right.

Maybe I am wrong, and you are just aiming for a different kind of client, with a time-sensitive but less elastic demand, yet not as inelastic as someone who will pay top dollar for Google or AWS? Feel free to explain, if there is no business secret at risk here.

Good luck anyway!


Those are all good points! We are trying to build a marketplace for large GPU providers to plug in their hardware. This will drive GPU costs down to a market price.

We are experimenting with pricing, and if you want to rent for a whole month, our price will be cheaper than that; email us. Also, thinking about efficient utilization: you might end up paying less if you don't have jobs running 24/7.

At the same time, we are building a software stack to utilize this hardware efficiently for deep learning applications. We need those resources for offering higher-level ML products.


I need things running 24/7 unfortunately. But I won't pretend I know the rest of the market. I just know my small part, where I don't see a good fit.

Anyway, the more competition the better, and I'm sure you will find a place!


Wow, thanks. Last time I checked (a couple of years ago) they had close to no GPU offerings.


Wasn't that true, in general, for cloud and VPS providers?

2 years is a Moore's Law doubling, which I've found tends to mean hardware offerings will be different. They're not necessarily dramatically different, if there's no new/unmet market demand, but this was a noteworthy enough one that I was (and still am) touting it as an advantage of own-hardware over cloud infrastructure.


Thanks for raising this, we will put the prices public soon.


[UPDATE] prices are public https://lab.snark.ai/pricing


We might be an early customer at SerpApi. [1]

In your pricing, you say you are selling, for example, the P106 at $0.095/h, but in your explanation you say you are using idle cycles to mine crypto (or the reverse, idle cycles to process ML tasks). When I rent a P106, do I have full access to the cores or just partial access?

[1] https://serpapi.com


Great! Once you rent a P106 you have full access to the GPU, and we don't run mining or ML tasks on it.

If you want to deploy large-scale computation and significantly reduce your costs, we can help by running mining at the same time, with your consent. This only applies to deep learning inference.


> we figured out that there's a way to run Neural Network inference and crypto-mining simultaneously without hurting mining hash rate

I understand that you figured out the technical side of things, but I'm curious about the human-nature side. Let's say currently you can have both NN and mining running. From a cryptocurrency price point of view, two things can happen:

a. Prices go up - Won't the incentive shift toward mining? How do you guys plan to handle such situations?

b. Prices go down - In this case, the intuitive thing to happen is for GPU pricing to get cheaper. But then, will you guys pass such benefits on to your customers? Because the whole point is to be able to make a steady income from the GPUs, mining or not.


Let's agree that if the mining price goes up, we will always be cheaper than public cloud. Otherwise, you could have run mining on AWS/GCP and been profitable. In this scenario, we are still able to offer a cheaper price to customers and a higher price to miners, even by just binary switching, assuming the gap is not negligible.

If the price goes down, customers will be able to set a lower price. As long as GPU holders' profit margin is high enough given electricity and maintenance costs, they will do the compute.

If the marketplace matures, the pricing of mining, deep learning, rendering and other tasks will be driven by the market. At Snark AI we are working towards creating this marketplace, which will provide optimal benefits to all parties.


> Lets agree that if the mining price goes up, it will be always cheaper than public cloud

I am confused. Let's say your price is $x while public cloud is $x+y. When cryptocurrency prices rise, the incentive is to increase your price to $x+y, if that is the profit point. In which case you will no longer be cheaper than public cloud. Any difference will be negligible. So I am not sure you can claim to be always cheaper than public cloud.


Following your notation, consider that our new price will be $x+y', where y' is the mining price difference.

You are right that my claim that y' < y is slightly weak (it was based on the "gap is not negligible" assumption, see below).

"gap is not negligible" - means if y' gets near to y, then y will get even higher and there will be always a market gap, which I think you disagree with.

Based on your suggestion, in the extreme scenario I would soften my claim to y' <= y, without us making a profit. :)
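A toy numeric version of this x / y / y' argument (all numbers invented for illustration): a rational GPU owner won't rent for less than mining earns, and a customer won't pay more than public cloud, so the feasible rental price is squeezed between the two.

```python
def rental_price(mining_rev_per_h, cloud_price_per_h, split=0.5):
    # Feasible prices lie between what mining earns the owner and what the
    # public cloud charges the customer; `split` picks a point in between.
    lo, hi = mining_rev_per_h, cloud_price_per_h
    if lo >= hi:
        return hi  # mining pays more than cloud: no room left to undercut
    return lo + split * (hi - lo)

print(round(rental_price(0.05, 0.45), 3))  # crypto low:  0.25
print(round(rental_price(0.40, 0.45), 3))  # crypto high: 0.425, y' -> y
```

As mining revenue approaches the cloud price, y' approaches y and the discount shrinks toward zero, which is exactly the extreme scenario conceded above.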


We aim to let customers easily switch between Snark and mining. Given that block rewards on platforms such as Bitcoin halve every four years, mining incentives go down and transaction fees (tips) become the main form of earnings. How this evolves is not well understood, but fees might not exceed the average size of transactions, and we're on the better side of that.
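The halving schedule referenced here is easy to state concretely (Bitcoin-specific numbers, included only to ground the claim; Ethereum and other GPU-mined chains follow different reward schedules):

```python
def block_subsidy(halvings, initial_btc=50.0):
    # Bitcoin's block subsidy halves every 210,000 blocks (~4 years).
    return initial_btc / (2 ** halvings)

print(block_subsidy(0))  # 50.0  (2009)
print(block_subsidy(2))  # 12.5  (the subsidy as of 2018, after two halvings)
```

As the subsidy shrinks, transaction fees have to make up a growing share of miner income, which is the dynamic the comment points at.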


Funny, I wanted to take a look at snark.ai to see if it can be used to run WebGL inside a browser.

For some reason, the snark.ai homepage brings my laptop to its knees though. Do I need a GPU cluster to see it?

Anyhow: Can I use snark.ai to run WebGL in a browser?


Haha, the joke is that our landing page uses your GPU, which we then rent out to others. :D On a side note, the animations are a bit heavy; we need to optimize them.

Regarding WebGL: that is actually an interesting point; we'd like to know more about the use case.


WebGL: I mean, can I install a desktop environment on your machines, start a browser in it, and then WebGL will be fast because it has access to a fast GPU?


@TekMol, we can give it a try together to see if the streaming speed will be enough to run your WebGL application smoothly.


Reading through your blog entry: "When the data to be processed is sensitive, Snark Infer will dispatch the task only to Privacy Shield Verified GPU providers." - can you shed some light as to a) what a privacy shield verified GPU provider is? b) how data sensitivity is specified or determined? (I did not see either mentioned in the docs yet)


Great question. What we're trying to say is that for privacy-sensitive data processing (e.g. face recognition), we are eager to work closely with clients and hardware providers to ensure security and compliance with regulations such as Privacy Shield, GDPR, etc. If you have such a use case, it would be great to get in touch.


Are you planning to add cards built for ML, such as the NVIDIA Titan V (110 teraflops)?


Vectordash has some: https://vectordash.com/pricing/


Yes, we are currently focusing on large-scale compute, but we are planning to add more hardware variety.


Hope you're not planning on putting any of those fancy Titan Vs in a data center! Rumor has it that ever since Baidu installed 100,000 GeForce GPUs in a data center, NVIDIA has made that against their EULA (unless you're mining cryptocurrencies, which is apparently A-OK).

http://www.datacenterdynamics.com/content-tracks/servers-sto...


Good catch! We don't own hardware ourselves, but we are taking the EULA into account with our partners.


Serious question (as opposed to a snarky one): what happens when/if the cryptocurrency market takes a large downturn? If your pricing is based on using their cycles, Snark.ai would have to raise prices, right?


In case of a crypto downturn, renting GPUs will be even cheaper. Actually, it will be a bigger win for our users and also for us.


Assuming the miners still want to rent out the hardware and not sell it :) Even then, you'd have two parties both taking slices of the profit, which makes it hard to compete in a practical manner against Amazon/Google.


Yeah, exactly. Hopefully the AI computation market will grow big enough in time that those crypto-miners can pivot into the more profitable business of GPU cloud for AI instead of selling their hardware in a crypto downturn. In the long term, Snark AI wants to create a marketplace where all qualified GPU providers can bid for the best market price, which will benefit everyone.


Really interesting.

Is storage persistent? Will files get deleted when I stop a machine?


Thanks for asking, that's a good point! Compared to spot/preemptible instances, your files will be persistent. You can stop your running pod, add more GPUs, and then continue training your model.


Graphics card prices will drop very soon due to the fall in Bitcoin's price.


That's what we count on as well.



No, we only work with large-scale hardware providers that guarantee security and reliability. We specialize in deep learning services, including training and deployment.


Then it is a copycat of NiceHash.

I wonder how they plan to undercut NiceHash?


I understand them to be totally different services. How would someone with ML training data on hand use NiceHash to get any work done? Is Snark set up to do mining? As I see it, they serve totally distinct and separate markets.


It's interesting that you're drawing the comparison with NiceHash. Snark AI will be a marketplace like NiceHash, but it mostly targets deep learning training and inference on the user end.


I don't know if I overlooked it, but it doesn't seem possible to delete my account if I so choose.


Just shoot us an email at support@snark.ai, we will handle it :)


I just realized how my comment sounded; sorry about that. I only wanted to make a note of it while I evaluated your platform for broader use.


Ah, no worries, thanks for helping us out! We will add automatic deletion support.


I'm not personally super invested in this, but you might want to check to see if that violates GDPR.


It doesn't, if they do it after request.


Doesn't GDPR mandate that you provide a self-serve way to delete your account?


You might be confusing it with the new California law (which would apply to snark.ai, per their own terms doc) that requires any renewing service offer that can be initiated online to have an online way to cancel.

https://www.perkinscoie.com/en/news-insights/california-upda...

I don't think that extends to full account deletion, though, more about stopping recurring charges.



