Am I the only one disappointed by this new wave of AI hype? Seriously, autogenerated avatars? When are our best minds going to work towards actually improving the lives of people? Not monkey JPEG scams, not chat bot bureaucracies, not self-driving cars that fail in bad weather and kill you, but something that actually alleviates humanity's biggest problems. Energy, homelessness, pollution and the environment, affordable housing crises in the West, content moderation, the loneliness epidemic - there's so much that needs to be done if we can just step out of the current hype cycle and focus on human problems, not just looking for ways to use cool tech.
I'm pretty sure "our best minds" aren't the ones building the ape avatars. They are working on the fundamental advancements of AI itself. Once those tools are built, it doesn't take nearly as many of the "best brains" to tweak them for specific applications.
In other words, you can just think of all these things (cash grabs and otherwise) as an effect of the rapid advancements in AI, in this case manifesting as a growth area of economic activity. As another poster said, others are working on medical applications, climate models, etc. It's not a zero-sum game on the application side.
How is AI going to solve the problems of energy, homelessness, pollution, the environment, affordable housing crises in the West, content moderation, and the loneliness epidemic?
Think of what you’re asking from it. You don’t ask Coke to solve corona or Google to solve world hunger. None of those issues are bottlenecked by AI. If you want to solve those issues, you shouldn’t expect AI researchers to fix them; you can be that change.
They are asking not for an AI to solve these problems, but to take the brains of the people who develop AIs and put them to use in other, more productive fields. Which is something I tend to agree with, and I would add advertising, finance/quant funds and cryptocurrencies into the mix - humanity doesn't need any of these for survival.
However, the core problem is that we have solutions for all these problems with technology that exists today:
- energy can be produced by solar, wind and, for baseload, geothermal and biogas. We might need to shift certain high-consumption industries such as smelters to seasonal and time-flexible production though, to accommodate the lack of solar power in winter and at night.
- pollution is a solved problem as well. Place filters on the exhaust stacks of industries that absolutely need to burn something, transition ICE vehicles to electric vehicles and, eventually, switch a lot of the traffic load from individual cars to a mesh of public transit (tramways and light rail) to remove the emissions caused by tires.
- affordable housing is a solved problem as well. Build socialized, community-owned housing like Vienna does, nationalize large landlords, improve infrastructure in rural areas to remove pressure on urban housing, and regulate where large employers can set up shop to avoid concentrations that cannot reasonably be supplied with workers and traffic.
- content moderation is a plain and simple matter of employing enough people.
- "loneliness epidemic"... that's the only tough one. We do know what causes loneliness, and a part of that is that young people have to move across the country, often enough across continents, to find gainful employment. Fixing rural infrastructure should help a bit, as should better availability of decent affordable housing - IIRC, a contributing factor both in Japan and Italy was that young people can't afford to move out of their parents' homes or if, then only into essentially sheds, so neither is conductive to invite a potential partner to.
The problem is, politicians today are not driven by what science and history have shown to be successful; they are driven by party ideology and populist bullshit. That leads them to make decisions that make situations objectively worse.
Check out the Climate Change AI[0] community! There are many people working on the interesting interface between AI and relevant problems.
I think the main thing to take into account is that many of these problems and organisations still live in an age of 200X tech, so just implementing current-age practices can already result in great improvements (although this does not result in flashy press announcements or papers).
Machine learning has been used for cancer imaging and geospatial detection for some years now. You can't blame the internet for hyping the fun and lucrative parts of AI first.
>Energy, homelessness, pollution and the environment, affordable housing crises in the West, content moderation, the loneliness epidemic
I think if a capable AI in the future is given the opportunity to fix this with zero restrictions, then most likely it is going to uproot the entire system of governance along with several ultra-rich and powerful people.
Thank you for saying what I have felt for the last 20 years in tech.
I will humbly add going to Mars to the list of wasted resources.
It’s already gone beyond the funny meme phase.
Startups and media businesses are looking to make a windfall on AI-generated art, music, code, writing, and other services. The payment models will be subscriptions, pay-per-use, and other models that make more money the more content is produced.
But there's still no AI (with associated mechanics) that can fold laundry.
Much like my fears about bluetooth connected cars being hacked to crash on the highway, it turns out that - by and large - nobody wants to kill me (or at least, not badly enough to do anything about it).
Ah yes, the "nobody wants to do it to me" excuse. Until you piss off the wrong person and you suddenly crash into the railing and die in an 'accident'.
In a more Orwellian world, it can be used to assassinate dissidents or individuals who speak out against authoritarian regimes. It's just a tool in a box, but it's one that's simple, easy to use, and leaves no evidence.
But that’s true with or without the AI. Anyone could decide they want to kill you. Most of the time we rely on “not everyone wanting to kill everyone else” to get by.
No way would I bring some company's robot into my house, especially not one that has anything to do with Google. Maybe it does your laundry and dishes for you, but you can bet that it'll be recording everything that it sees and hears and sending all that data back to the people it actually works for so that they can use that data against you or sell it to someone else who will.
Unless I can find a model I can verify has zero networking ability and isn't gathering and storing data for somebody else, no thanks!
FoldiMate, founded in 2012, was a California-based company developing a robotic laundry-folding machine. Their clothes-folding machine was supposed to enter the market by the end of 2019. *In 2021, the company folded*.
That's what I'm talking about. I think AI would approach "laundry folding" in ways we never thought of, using simpler machinery than would have been expected, in novel ways. This is pretty specific to something very geometric / topological like folding things.
Unfortunately, doing stuff while being subjected to random accelerating forces makes many people feel nauseous, and physical tasks become more difficult.
Why? In what world are you so busy that you don't have "enough time" to go to your home, take time to unwind, clean up after a day's work, cook a meal for you and your family and enjoy a family dinner/lunch/breakfast?
Replace cooking with laundry/cleaning/bathing/repairing/small fixes around the home?
Is life REALLY SO FAST AND TOUGH that you have to multi-task basic human social/personal processes?
If this were true, restaurants would already have these robots.
Also, I'm talking about real cooking, not some mechanized approximation that produces meals akin to fast food, or the microwave stuff you can already buy in the supermarket.
As a robotics engineer I don’t think it’s true that “if it were a robotics problem it would be solved.” Robotics for non-industrial uses is basically in its infancy.
Also I am of the opinion that the best way to make a meal is a purpose built series of small machines to perform different tasks. So you have an onion preparation machine which peels and cores the onions and maybe also performs onion-specific slicing. You have a vegetable washing machine to wash potatoes and carrots and zucchini etc. You have various cutting chambers that feed in to various cooking systems.
Doing all of this is possible today and I don’t believe it has to be limited to the low quality food you mention. But developing all of these specialized systems is extremely expensive. I actually really hope to work on all of this stuff as a massive open source project once my open source farming robot project gains enough traction. I literally obsess over this problem.
Otherwise you could imagine something like a pair of robot arms and a vision system on a track in a normal kitchen, but again, robotics really hasn’t been able to produce functional or affordable human-like hands, and the software to handle them is also in its infancy (that part is an “AI problem”, though).
Anyway, robotics is extremely expensive, and low minimum wages mean it’s cheaper to abuse migrant workers in the kitchen than to pay for all the R&D necessary to really solve these problems.
But my hope is that an open source project could get the ball rolling and then the costs required to finish everything could be spread among many different groups once the basic concept is proven.
They can stack vertically such that they take up less space than a commercial kitchen, which cannot stack vertically (at least not on the scale of multiple separate operations in one vertical meter) and requires space for humans to move around. An onion prepping machine might be 30cm x 30cm x 100cm. You could fit 11 mechanisms of that size in one cubic meter.
My view on cleaning is that the systems must be automatically self cleaning. Otherwise yes cleaning would be a pain.
Interesting. I agree that arms and hands are probably not necessary and that custom tools are better suited. One thing that came to mind when you mentioned the many chambers and cooking systems: these must be easy to clean to avoid food waste getting stuck.
Oh absolutely. My view is that integrated automatic cleaning must be part of the system design. Such a system would be a huge pain if it was not self cleaning.
In a big city in the Netherlands we stopped collecting plastic and metals separately. This is because there are better machines now that reach higher accuracy levels.
Aye sorta like how we all use a separate PC per app :P
Each to their own. I had plenty of folks saying they're bad and break often, but I've never had one do a bad job, assuming it's being maintained - cleaning lint traps, not overloading it, descaling the system now and then (hard water area).
It's so damn convenient to chuck a load in before bed and come back in the morning to nicely clean and still toasty-warm clothes ready to go.
That’s because folding laundry is way, way harder.
I think it’ll be a long time before automated laundry folding is commercially viable at household scale (factory scale is another matter) simply because as other, easier, more lucrative activities are automated, the cost of “unskilled” human labour will be driven down faster than the cost of the equipment required to fold laundry.
You put AI to work creating some shit for you. Sell it. Hire a cleaner, a lawn mower, whatever. Or better yet, make the AI sell that shit and let it hire those people.
We’re not very far from some Satoshi putting it all together.
One interesting idea I saw somewhere on the internet was that we might see a return to more mechanical/hands on professions while low level white collar jobs get destroyed in the same way old factory jobs did.
Kind of. They all run Stable Diffusion because it was released fully open source.
There’s still competitive advantage to owning, training, and gatekeeping access to models. MidJourney and DallE are both superior to Stable Diffusion along many axes.
Monetizing models is tricky because it’s so cheap to run locally but so expensive in the cloud. Except that if you release your model such that it can run locally, all advantage is lost.
I wonder if there is a way to split compute such that only the last 10% runs in the cloud?
Why is it expensive to run in the cloud and cheap to run on a device?
1. Commodity hardware can do the inference on a single instance (must be true if a user device can do it).
2. It’s apparently possible to run a video game streaming service for $10/month/user.
3. So users should be able to generate unlimited images (one at a time) for $10/month?
Maybe the answer is the DallE/Midjourney models running in the cloud are super inefficient and Stable Diffusion is better. So the services will need to care about optimizing to get that kind of performance. But it’s not inherently expensive because they run it on the cloud.
I wouldn’t assume those $10/mo gaming services are profitable.
It’s not that running in the cloud is more expensive. It’s that people already have a $2000 laptop or maybe even a $1600 RTX 4090. If I’ve got that, I don’t want to pay $20/month to 6 different AI services.
Sam Altman said ChatGPT costs like 2 cents per message. I’m sure they can get that way down. Their bills are astronomical. But the data they’re collecting is more valuable than the money they’re spending.
Stable Diffusion isn’t super fast. It takes 30 to 60 GPU-seconds. There’s minimal consumer advantage to running in the cloud. I’d run them all locally if I could.
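For a sense of what "locally" means here, a minimal sketch using the Hugging Face diffusers library (the model ID and settings below are just common defaults I'm assuming, not anything from this thread):

    import time

    import torch
    from diffusers import StableDiffusionPipeline

    # Load Stable Diffusion once; the fp16 weights fit on a single consumer GPU.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    start = time.time()
    # One 512x512 image at a typical 50 denoising steps.
    image = pipe("a rabbit avatar, digital art", num_inference_steps=50).images[0]
    image.save("avatar.png")
    print(f"generated in {time.time() - start:.1f}s")

On a recent card that loop is basically the whole "service"; the marginal cost is mostly electricity.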
The problem is (as always) the "bad user" case. You get some users who run at 100% utilization full time (or more because depending on your model they might be able to have multiple instances). They'll be the ones doing things like running a Discord bot in a popular server, or reselling the image generation or something.
Many users will use the service at once though, not evenly distributed... so you might want to overprovision. Which is basically what you don't want to do - profitability is reached by underprovisioning.
I think for an AI generation service this problem is actually more solvable than usual. You can slow down how fast the results are returned, which will slow down the demand. Charge more for a higher tier that gets prioritized. People are going to be somewhat bothered if the result takes 10 seconds instead of 1 second, but it’s not the end of the world if it’s a rare event. If Netflix can’t keep up with demand and your video spends half the time buffering, that would be a worse failure mode.
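A toy sketch of that prioritization, using Python's heapq as the queue (the tier names and job fields are made up for illustration):

    import heapq
    import itertools

    # Lower number = higher priority; paying tiers jump ahead under load (hypothetical tiers).
    TIER_PRIORITY = {"pro": 0, "standard": 1, "free": 2}

    _order = itertools.count()  # tie-breaker keeps jobs within a tier first-in, first-out
    queue = []

    def submit(prompt: str, tier: str) -> None:
        """Enqueue a generation request; cheaper tiers simply wait longer when busy."""
        heapq.heappush(queue, (TIER_PRIORITY[tier], next(_order), prompt))

    def next_job():
        """A worker pops the highest-priority job; free-tier requests drain last."""
        return heapq.heappop(queue)[-1] if queue else None

    submit("a watercolor rabbit", "free")
    submit("a neon cityscape", "pro")
    print(next_job())  # -> "a neon cityscape", served before the earlier free-tier job

The free tier still gets its result, just later, which is exactly the degradation mode you can get away with for image generation but not for video streaming.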
Some random stats for successful web services (unit is average minutes of use per day per user):
YouTube - 19 minutes
Spotify - 140 minutes
TikTok - 95 minutes
Netflix - 71 minutes
So we’re looking at roughly a 1% - 10% utilization range (19 of 1,440 minutes per day is about 1.3%; 140 of 1,440 is about 9.7%), depending on where your game streaming or AI inference app falls. You need to factor that in when figuring out the pricing; your competition certainly will.
My intuition tells me GPU utilization is very different. Those services are egress bound. Egress is super elastic and can be scaled to stupefyingly large numbers.
GPU utilization is less scalable. No GPU cloud service is particularly popular. I don’t think any of them are profitable. Having a 1:1 ratio of GPUs to users is tough.
Gaming is especially difficult because it’s super latency sensitive. Which means you need racks of expensive GPUs sitting in hundreds of edge nodes. I’m still super bearish on cloud gaming.
ML tools aren’t that sensitive. They’ll exist and they’ll be profitable. But I think the economics are tough. And as a consumer I’d still greatly prefer to run locally. Which means there’s a tension between “good for the customer” and “good for the business”.
Can't wait for a court to toss that particular one out. "Consumers who purchase a product containing a copy of embedded software have the inherent legal right to use that copy of the software" (Chamberlain v. Skylink)
AFAIK Nvidia restricts which GPUs you can run in a datacenter, so you cannot buy, for instance, an RTX 4090 and use it in a datacenter. You need to buy the datacenter cards, which are much more expensive.
GPUs are expensive. You need at least 10 GPUs to quickly render Stable Diffusion images. If you want to run a service you need more of them. Thousands of dollars per month are easily reached.
>> Monetizing models is tricky because it’s so cheap to run locally but so expensive in the cloud.
Can you expand on this a bit? The way I'm thinking, that is only the case if you need low latency. And in that case, it seems you just need to charge enough to cover compute.
We're running Stable Diffusion on an EKS cluster and it evens out the load across calls and prevents over-resourcing.
If latency isn't an issue, it can be run on non-GPU machines. If you're looking for something under $300 or $400/mo, then I agree it may be an issue.
On that note, I haven't checked whether there are Lambda/Fargate-style options which provide GPU power, to achieve consumption-based pricing tied to usage, but that might be a route. Can anyone speak to this?
>On that note, I haven't checked whether there are Lambda/Fargate-style options which provide GPU power, to achieve consumption-based pricing tied to usage, but that might be a route. Can anyone speak to this?
Thanks for this. This is nice and the prices are great... but I was specifically curious about something where consumption can be tied to cost (e.g. Lambda/Fargate style, where you pay by the call).
It's not quite Lambda, but GKE Autopilot supports GPU workloads, so it could be a relatively easy way to do this.
You could have a REST service sticking incoming requests into a queue, and then a processor deployment picking jobs off the queue using GPU resource requests / spot instances. You'd probably also want something scaling the processor deployment replicas based on queue depth and your budget.
I haven't compared the pricing to EKS so unsure if it would really be better financially, but it would avoid having to manage scaling up/down GPU nodes explicitly.
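For what it's worth, a rough sketch of that processor side, assuming a Redis list as the queue (the queue name is made up, and generate() is a stand-in for the actual Stable Diffusion call on the pod's GPU); scaling the replica count from queue depth would live outside this loop, e.g. in KEDA or a small custom autoscaler:

    import redis

    QUEUE = "sd-jobs"  # hypothetical queue name, fed by the REST front end

    r = redis.Redis(host="redis", port=6379)

    def generate(prompt: str) -> bytes:
        # Stand-in for the actual inference on this pod's GPU (e.g. a diffusers
        # pipeline); should return the encoded image bytes.
        raise NotImplementedError

    while True:
        # Block until the front end pushes a job; here a job is just a prompt string.
        _key, raw = r.blpop(QUEUE)
        image_bytes = generate(raw.decode("utf-8"))
        # In practice: upload image_bytes to object storage and notify the caller.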
> If you're looking for something under $300 or $400/mo, then I agree it may be an issue.
Yeah. These models don’t need special resources to run. As a consumer I would prefer to buy a 4090 and then run everything locally. I don’t want to pay $10 or $20 monthly subs to a half dozen different AI services. All professional software turning into subscription services sucks.
Midjourney charges $30/mo for unlimited “relax” time and 15 hours of fast GPU time. That’s not too bad. But multiply that by 6 services and a 4090 pays for itself in a few months.
Midjourney is completely different from SD on a computational level.
SD is optimized for speed; it takes 5 seconds to generate a 512x512 image, and their internal optimizations are bringing it down to 0.5 seconds (stated on their Twitter). To achieve this, they do one-shot generation straight to 512x512, without upscaling slowly from 64x64 -> 256x256 -> 512x512.
Midjourney is optimized for quality. It actually does do the gradual upscaling, which is the approach the Imagen and Ediffi papers demonstrated. This results in far better quality, but it is extremely taxing and slow. Even on 'fast' mode it runs like a snail compared to SD. I don't think it'll work on anything below an A100.
> To enable Bunny AI, simply enable the feature under the Bunny AI panel and start generating!
Where is this exactly?
Also, are there other AI image generation tools that can be used now at low prices? I've tried some of the free ones, but they have two-hour wait times sometimes!
There's a huge difference between diffusion models that were built to be run on commodity hardware and the huge autoregressive models like GPT. You can't even run GPT3 on the cloud without some specialized interconnect.
How do you know this? Not doubting you, just curious. I've always been curious about the requirements or size of GPT-3, because Eleuther's GPT-NeoX 20B takes like 40GB of VRAM to run and I think it is the closest analogue to GPT-3.
No, you can’t build a cluster of GPUs to run GPT without a special, very fast interconnect like InfiniBand. Stable Diffusion can run on a single GPU, like a 3090.
Looks like it is generating on the fly, no? The second request for each generation (unique number) takes no time.
for a in `seq 1000 2000`; do wget "https://bunnynet-avatars.b-cdn.net/.ai/img/dalle-256/avatar/email-${a}/rabbit.jpg?width=128&hiEbunny=is_this_secure_though" ; done
I can't understand the need for this kind of thing as there are so many options for using Stable Diffusion for very cheap (or free) and of course Dall E has its own UI. What's the point of using a service like this (besides getting free compute while they are launching)? Do we really need another service aggregator?
Feels a bit gimmicky to me, but maybe I’m missing some need in the market.
I wonder about auto-generated captchas perhaps? or are these going to be easy to reverse?
On a side note: I’d love to switch from Cloudflare to bunny, but it’s missing a WAF. We were promised it from bunny for a long while, but didn’t see it yet. Personally I would imagine it being a more core feature for a CDN than AI bunnies on the edge, but I guess I’m old and boring.
While this is cool, using an MD5 of users’ email addresses is not cool. Given a large rainbow table, you could likely find the identity of lots of users and tie them to specific comments where they thought they were anonymous… not the intention when enabling funny avatars…
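To make the risk concrete, here's a toy dictionary attack against MD5'd email addresses (the addresses are made up; the point is just that an MD5 of a guessable input is not anonymization):

    import hashlib

    def md5_hex(email: str) -> str:
        return hashlib.md5(email.lower().encode("utf-8")).hexdigest()

    # The hash as it would appear in an avatar URL (derived here from a made-up address).
    target_hash = md5_hex("alice@example.com")

    # An attacker's candidate list: leaked dumps, scraped addresses, or generated
    # patterns like first.last@domain. Real lists run into the billions, and MD5
    # can be tested at billions of guesses per second on a single GPU.
    candidates = ["bob@example.com", "carol@example.org", "alice@example.com"]

    for email in candidates:
        if md5_hex(email) == target_hash:
            print(f"avatar belongs to {email}")
            break

A random per-user token, or at least a salted hash, would avoid this entirely.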
I think a standard model is not very useful. We are doing something similar, but with your own custom model as an API: https://88stacks.com
I do like their real-time API though, and we are moving towards that as well. It just costs a lot more to do in real time.
You should pick some better examples on your homepage. This looks completely laughable compared to anything you can get from MidJourney on your very first attempt (it's amazing with cars):
> Bunny AI is currently available free of charge during the experimental preview release and is enabled for every bunny.net user. We want to invite everyone to have a look and play around, and share the results with us. Bunny AI is released as an experimental feature, and we would love to hear your feedback.
This seems like a massive waste of resources. If the image is generated by AI it's clearly not offering very much to the reader but they still have to load it and scroll past it to get to the actual content.
They talk about how they use two models in the article, DALL-E and Stable Diffusion. So it seems like multi-model is correct. Why is it a bad title if there's no typo?
As I said: UX is more important. We will focus on bringing a better experience to the users. Managing thousands of boxes and combinations is not the goal of design work. Solving a problem for the users is.
Good evening, Who could help me unmask my hacker who is none other than my dear naughty cousin. Certainly very strong in his field but who does not digest that his nasty little cousin did not at all appreciate being raped from these 4 years and this for 10 years my silence allowed him to build his empire and I filed a complaint well after his ascent. A dismissal was pronounced and not an acquittal. No place = lack of evidence and witnesses, yet everyone is aware. For 14 years he has been spying on me, hacking my phones, computer, cameras, microphones even in my daughter's room, in the toilets, in short James Bond bathroom live. To see the police you need proof and he has a long arm finally this family full of success have very long arms, I am the ugly duckling me who dares not to accept submission. Finally it is a real hell that continues. So if anyone can help me?? He is known at least he was known under the pseudonym of kerim. He works at Expleo (Toulouse, Colomiers) His name is Abdelkrim Mimouni Do not hesitate to contact me if you too do not accept the unacceptable whatever the status, whatever the power, the number... Thank you very much, I do not don't know what to do anymore good evening "They thought of burying
Unfortunately it is the reality that far exceeds the fiction. I may have expressed myself badly, it's happening to me. A waking nightmare is mind-boggling. I don't know what to do, I send an SOS. Sorry to bother you, while searching I came across this application, so lost for lost I launch this SOS
Unfortunately reality no schizo
Fantastic. I use all of bunny's services across all of my companies and can vouch for the absolutely fantastic service they provide at the best cost. Use them blindly for all your needs.