It wouldn't be too hard to do any of the things you mention. See ControlNet for Stable Diffusion, and vid2vid (if this model does txt2vid, it can also do vid2vid very easily).
So you can just record some guiding footage, similar to motion capture but with any regular phone camera, and morph it into anything you want. You don't even need the camera, of course; a simple 3D animation without textures or lighting would suffice.
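To make that concrete, here's a minimal sketch of pose-guided generation with ControlNet via Hugging Face diffusers. The model IDs are the commonly published ones, but treat the exact names and the input filename as assumptions:

```python
# Sketch: guide Stable Diffusion with a pose pulled from an ordinary phone
# photo/frame via ControlNet (Hugging Face diffusers). Model IDs are the
# commonly published ones; the input filename is hypothetical.
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Extract a pose skeleton from any regular photo or video frame.
pose_detector = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
pose = pose_detector(load_image("frame_from_phone.jpg"))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The pose constrains the composition; the prompt supplies the look.
image = pipe("a knight in ornate armor, dramatic lighting", image=pose).images[0]
image.save("styled_frame.png")
```

Run per frame (or on a rendered untextured 3D animation), this is essentially the morphing workflow described above.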
Also, maintaining a consistent look was solved very early on, once we had free models like Stable Diffusion.
"second party developer" is kind of a loose term. Rare had close relationship with Nintendo at the time, and were allowed to produce games for some of Nintendo's IP, but they ultimately ended up being acquired by Microsoft, so they weren't so exclusive that Nintendo could prevent that outcome.
With SD I can generate at least 15k images daily on my old laptop; I can train it with new styles, characters, real people, etc.; download thousands of new styles, characters, and real people from Civitai; and, best of all, never worry about losing access to it, being censored, having to jailbreak it, being snooped on, and so on.
Plus there are a million other tools the community has made for it, like ControlNet, or AnimateDiff for creating videos. I can also easily create all kinds of scripts and workflows.
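As a rough idea of what such a script looks like, here's a minimal local batch-generation sketch with Hugging Face diffusers; the prompts and filenames are made up, and the LoRA line assumes a newer diffusers version:

```python
# Minimal local batch-generation sketch (Hugging Face diffusers).
# Prompts and file names are made up for illustration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Styles/characters from Civitai usually ship as LoRA files you can layer on
# (newer diffusers versions):
# pipe.load_lora_weights("downloaded_style.safetensors")  # hypothetical file

prompts = ["a watercolor fox", "a cyberpunk alley at night"]  # hypothetical
for i, prompt in enumerate(prompts):
    for j in range(4):  # a few variations per prompt
        image = pipe(prompt, num_inference_steps=25).images[0]
        image.save(f"out_{i}_{j}.png")
```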
"Lactase nonpersistence is most prevalent in people of East Asian descent, with 70 to 100 percent of people affected in these communities."
Masai herdsmen are East Africans, so I find that statistic bewildering. "Lactase nonpersistence" means the decline in production of the enzyme lactase beyond infancy. I don't know whether lactase nonpersistence is equivalent to lactose intolerance.
A (very small) percentage of the population is allergic to peanuts. They can even die if they ingest a peanut by accident. Does that make it so that "peanuts are bad for you"? Can I make that argument? Milk is perfectly fine, and an important part of a healthy diet, for 90% of the population of the Americas, Europe, Africa, etc. The magnitude of the propaganda to make people think otherwise simply baffles me.
> Tell that to the Massai, that live basically on a diet of milk, meat and blood
They also live a mainly active, nomadic lifestyle herding cows rather than sitting down staring at a computer screen - and usually die in their forties.
They specifically adapted to survive on that diet due to both group level isolation and individual upbringing (gut bacteria development). Different socio-cultural groups often have different tolerance for common food items.
That's irrelevant to the question. A food is either healthy for human consumption or it isn't. The argument is: if a population of humans exists that makes milk a very substantial portion of its diet and is perfectly fine, then arguing that milk is bad for humans is complete stupidity.
That's why I said "right now", since I feel that most people have moved from the one you linked to AUTOMATIC's fork by now. hlky's fork (the one you linked) was by far the most popular one until a couple of weeks ago, but problems with the main developer's attitude, plus a never-ending, issue-ridden migration from Gradio to Streamlit, made it lose its popularity.
AUTOMATIC has the attention of most devs nowadays. When you see any new ideas come up, they usually appear in AUTOMATIC's fork first.
Just as another point of reference: I followed the Windows install, and I'm running this on my 1060 with 6GB of memory. With no setting changes it takes about 10 seconds to generate an image. I often run with sampling steps up to 50, and that takes about 40 seconds per image.
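If you want to reproduce that comparison yourself, here's a small timing sketch. It uses diffusers rather than the webui, so the absolute numbers won't match exactly:

```python
# Rough timing sketch: default-ish step count vs. 50 steps.
# Absolute numbers depend heavily on GPU, resolution, and sampler.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for steps in (20, 50):
    start = time.perf_counter()
    pipe("a lighthouse at dusk", num_inference_steps=steps)
    print(f"{steps} steps: {time.perf_counter() - start:.1f}s")
```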
They sure do. InvokeAI is a fork of the original repo CompVis/stable-diffusion and thus shares its fork counter; those 4.1k forks come from CompVis/stable-diffusion, not InvokeAI.
Meanwhile, AUTOMATIC1111/stable-diffusion-webui is not itself a fork, and has 511 forks of its own.
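You can verify both claims against the public GitHub API (the counts themselves drift over time, and I'm assuming the current repo slugs):

```python
# Check fork status and fork counts via the public GitHub API.
# Repo slugs are assumed; counts change over time.
import json
import urllib.request

for repo in ("invoke-ai/InvokeAI", "AUTOMATIC1111/stable-diffusion-webui"):
    with urllib.request.urlopen(f"https://api.github.com/repos/{repo}") as resp:
        data = json.load(resp)
    # "fork" says whether the repo is itself a fork;
    # "forks_count" is its own fork counter.
    print(repo, "is a fork:", data["fork"], "| forks:", data["forks_count"])
```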
Google is paying, and yes, you can, but they will disconnect you after a while. And if you abuse it too much, you won't be able to use it until the following day...
You can also buy Colab Pro and Colab Pro+, which have fewer limitations and faster GPUs.
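A quick way to see which GPU tier a given session actually got (free and Pro hand out different cards):

```python
# Run inside a Colab notebook to see the assigned GPU and its VRAM.
import torch

print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4" or "Tesla P100-PCIE-16GB"
print(f"{torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB VRAM")
```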
In my usage Colab and Colab Pro were similar, with plain Colab occasionally OOMing during model loading. That said, I've actually been seeing times slower than yours on Colab, and I think they're slower than on my RTX 3080: ~15 secs per image. I'm not sure why, though.
You are much better off running it locally at those speeds. A P100 does 13 to 33 seconds a batch in my experience. Cloud-to-cloud data transfer (Hugging Face to Colab) is ridiculously fast, though.
I'm on Colab Pro and get about 3 steps per second when generating a single 512x512 image at a time, with a slight throughput improvement when I batch 2-3 images.
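For reference, that small-batch trick is just one call producing several images at once. A sketch with diffusers (parameter names from its public API):

```python
# Generate a small batch in one forward pass; usually slightly better
# throughput than generating one image at a time.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

images = pipe(
    "a misty forest, 4k", height=512, width=512, num_images_per_prompt=3
).images  # three 512x512 images from one call
```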