You could have a monthly fee and decide what percentage of it goes to specific artists. If you listen to an artist a lot, the app could ask whether you want to transfer some percentage from another artist to this one. That's a feature I would personally love to use.
I moved from Cubase to FL two years ago. I'm in love with the patterns in FL!
The only thing I really miss is the feature in Cubase where you could add effects to an audio pattern destructively. You could create really complex, glitchy patterns, and it was easy to mix them together (crossfades and such).
The other thing I would love ImageLine to improve is the workflow when you use audio samples directly in the sequencer: things like fade in and fade out, and a much deeper zoom overall in the sequencer for moving samples around.
Edit: I know you can work with Edison, but it isn't intuitive imo.
Imagine an RTS or something similar where you play alongside an AI that learns your behaviours and tries to help you. At first you reject what it does, so it learns and tries something else. A bit like the creature in Black & White!
I'm no marketer, but nobody wants to read an ad for fun. You have to shove it in their face. Even if it makes you angry, now you know about the product, and the idea of buying it might come back to you later. I guess it's something along those lines.
See, now I disagree with that. When it comes to, for example, podcast ads, I don't mind them. They're usually read by the host of the podcast, and it's done in a very clear "this is an ad, this company is helping us make this thing" way. It's not an attempt at trickery, it's not a lie, it's just there. And when it's over and done with, I'm 10,000% more likely to follow up on one of those products (not to mention they're usually pretty relevant to the audience of whatever podcast).
Advertising itself is not a bad thing; it's a critical part of any capitalist society. I just don't understand how we got to the point where seemingly every ad is as annoying as it could possibly be, and every ad company is in a race to the bottom of "how can we be as intrusive and unsubtle as possible?"
There's pretty much no way to avoid writing a large comment in response to this. So, with apologies...
It was my focus for a long time to achieve perfectly photorealistic rendering. I come from a gamedev background, and since around age ten I've been fascinated with how to get a computer to paint pictures. By 17 I was writing game engines roughly equivalent to Quake, and used that portfolio to get into the industry. The next three years were spent getting as close as possible to the bleeding edge of real-time graphics development.
I remember having long debates with a colleague about what high-dynamic-range lighting "meant." "If we spin around in our office chair, our brains do not suddenly change the overall brightness of this office. Why should we be programming games to do this? Why is everyone doing things that way?"
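(For anyone who hasn't seen it from the inside, here's a minimal sketch of the kind of auto-exposure scheme we were debating. The names and constants are illustrative, not from any particular engine: the tone map is keyed to a smoothed average of scene luminance, so turning toward a bright window dims the whole frame, iris-style.)

```python
import numpy as np

def adapt_exposure(hdr_frame, prev_avg_lum, dt, tau=0.5):
    """One step of the auto-exposure scheme under debate: the tone map is
    keyed to a running average of scene luminance, so turning toward a
    window darkens the whole frame over roughly tau seconds."""
    # Per-pixel luminance (Rec. 709 weights); epsilon guards log(0).
    lum = (hdr_frame * np.array([0.2126, 0.7152, 0.0722])).sum(axis=-1)
    frame_avg = np.exp(np.mean(np.log(lum + 1e-6)))  # log-average luminance
    # Exponential smoothing: the virtual "iris" lags behind the scene.
    avg_lum = prev_avg_lum + (frame_avg - prev_avg_lum) * (1.0 - np.exp(-dt / tau))
    # Reinhard-style tone map keyed to the adapted luminance.
    scaled = hdr_frame * (0.18 / avg_lum)
    return scaled / (1.0 + scaled), avg_lum
```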
My concerns were of a more fundamental nature as well. What is a diffuse texture? A diffuse texture is a well-understood concept in realtime graphics. Anyone with a basic knowledge of shaders should immediately recognize:

color = lighting * diffuse + ambient
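(Spelled out as a minimal Python sketch for anyone who hasn't written a shader; the function and parameter names are mine, purely illustrative.)

```python
import numpy as np

def shade(diffuse_tex, normal, light_dir, light_color, ambient):
    """color = lighting * diffuse + ambient, written out.
    diffuse_tex, light_color, ambient: RGB triples (numpy arrays).
    normal, light_dir: unit vectors."""
    # Lambertian lighting term: cosine of the angle between surface and light.
    n_dot_l = max(0.0, float(np.dot(normal, light_dir)))
    lighting = light_color * n_dot_l
    # Componentwise RGB multiply: the "looks good" part with no
    # physical counterpart, since RGB triples are not spectra.
    return lighting * diffuse_tex + ambient
```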
Trouble is, it doesn't correspond to reality even slightly. It's not even roughly close. It happens to look good to humans, and that's why we use it. But the further I tried to probe the mystery of realism in computer graphics, the more I ran up against this phenomenon of "we use X technique because it looks good."
So I turned to research papers. Books. The medical field. Everywhere that was remotely related to possible breakthroughs in photorealistic rendering. Research papers are excellent for assembling techniques, but not results. The books on human vision and color science were more promising, yet most of the industry seemed (and still seems) to pay little attention to them. Compare a book about color perception to, say, http://www.pbrt.org/ and you'll see a stark difference. Flip through the table of contents and you get transformations, shapes, primitives, color and radiometry, sampling, reflection, materials, texture, scattering, light sources, Monte Carlo integration...
And for what? We know that these techniques simply do not produce computer-generated videos that a human will identify as a real-life image. It's not for lack of processing power. There is a disconnect between the old rules and those that will ultimately result in real-time realism, and you won't find it in that table of contents.
Now, the trouble with writing all of this is that if I knew how to do it, I'd have done it already. It's a life-long search, and it's not so easy to refute an entire industry without being (rightly) dismissed.
But if you wish to know what I suspect are the ways forward, it's this: Get a camera. Take photos. Compare those photos to the results of the algorithms you write. Iterate on your algorithms until they produce results that match something that already captures nature, not our beliefs about how we ought to be able to capture nature. "Just throw in physical models and presto!" has not thus far been true.
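(As a concrete starting point, here's one shape that comparison loop might take, assuming an aligned photo/render pair of the same scene. The metric, RMSE in linear light, is my choice for illustration, not a claim about the right one.)

```python
import numpy as np
from PIL import Image

def srgb_to_linear(img8):
    """Undo the sRGB transfer curve so the comparison happens in linear light."""
    c = img8 / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def compare(photo_path, render_path):
    """RMS error between a photograph and a render of the same scene."""
    photo = srgb_to_linear(np.asarray(Image.open(photo_path), dtype=np.float64))
    render = srgb_to_linear(np.asarray(Image.open(render_path), dtype=np.float64))
    assert photo.shape == render.shape, "align and crop the images first"
    return float(np.sqrt(np.mean((photo - render) ** 2)))

# Iterate on the algorithm, re-render, and drive this number down:
# print(compare("office_photo.png", "office_render.png"))
```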
You'll notice, for example, that computer graphics in videogames have plateaued. They get more impressive with each generation, but that impressiveness does not get them progressively closer to looking real. Nor should it. A computer game tells a story. The closer it looks to real life, the more restricted the artists are, along with the rest of the design of the game.
So we turn to the movie industry for hope. But it's restricted in exactly the same way. The research papers are all along the lines of new techniques to try, or studies of existing pipelines and how to deal with their complexities. It's not fundamental research.
As someone who has spent his life in pursuit of realism in computer-generated video, my recommendation is this: read da Vinci's journals. Pay attention to what each page is saying. He had to discover from first principles what makes a painting look real, and why. You'll notice that he spends most of his time talking about human vision and our perception of color.
If someone is going to make this development happen, it's not going to come from the game industry, and it won't come from the movie industry. That leaves you. Hopefully this will encourage some of you to pursue it. Once you accept that most of the computer graphics industry isn't actually focused on achieving realism, you'll start to develop your own techniques. My hope is that this will eventually lead to a breakthrough.
1. "Take a picture and make it look right" is exactly what people have done for things like POVRay since at least the 90s. I can't find it now, but at one point someone set up a glass ball on a checkerboard and used a point light and a camera to confirm the diffraction and distortion models were correct because someone claimed they didn't look right or were doing the wrong thing. The math that's there for rays, shapes, diffraction, diffusion, caustics, etc. is accurate, and necessary but not sufficient. Which brings me to
2. CG realism has generally hit the Uncanny Valley by now. It's so close to real that we think "that's pretty good", but it's far enough away that we still know "something's wrong". It's the difference between a dummy, a corpse and a live person.
A couple of examples I remember from the past decade are laser + milk and better skin rendering on one of the NVIDIA demos a while back. There wasn't (isn't?) a good model to simulate the diffraction and subsequent diffusion of a laser shining into a glass of milk. Actual lasers with actual milk don't do the things we expect of modeled lasers in simulated milk. Some component is missing, but all the existing math is right for lots of other cases. The NVIDIA skin thing was adding 3 or 4 layers to an existing model to simulate subsurface scattering and reflection that happens in skin, vs old models that treat skin as paint. The old stuff was right, just not enough.
All of that aside, there are decent photorealistic rendering options for some materials today, but at the cost of CPU hours of render time. If you can do better then please do, even if it's just for one material or one physics action.
I appreciate that you probably put a lot of effort in, but your post largely sums up to "they are wrong, I've done extracurricular research to prove it, and I have no alternative."
While noble, I can't help wanting to understand what alternatives you were leading yourself toward. As an example, light can arguably be simplified to intersecting cylinders and spheres that bounce off surfaces to create new 3D shapes. Each shape would also have an origin 2D shape based on what's reflecting it. An "eye" reads shape intersections with itself, and can also filter those intersections with respect to the origin shape. After each bounce, the new shape takes on the bouncing light's color multiplied by the color of the bounced object. In low-light situations, subtle luminosity differences can be enhanced.
What I did was offer an example. Perhaps you'll one day be successful, but I got the impression you are some kind of renegade with a mission. While I can certainly relate to that, I view science and building the future as quite far from renegade status. And in the meantime, you gave me a sob story with no algorithms or solutions except "take real pictures and compare them." As a lazy programmer, walking outside and discovering the world doesn't interest me too much.
We can come up with hundreds of simplified lighting models. Have you tried the one you mentioned? Does it correspond to photographs, video? That's the hard work, and it's why I listed none.
Here's something specific: What did you mean by "multiply"? You cannot "multiply" colors. Not unless you concede that your model has nothing whatsoever to do with physical reality. And at that point, why not use a photo of nature (or your eyes' perception of nature) as a baseline comparison?
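(To make the objection concrete with a toy example of my own; the sensor curves below are invented, not the real CIE data. Multiplying spectra per wavelength and then projecting to three sensor values generally does not match projecting first and multiplying componentwise.)

```python
import numpy as np

wl = np.linspace(400, 700, 31)  # coarse wavelength grid, in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Crude, invented "R,G,B" sensor sensitivity curves (not the real CIE data).
sens = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 40)])

def to_rgb(spectrum):
    return sens @ spectrum  # project a spectrum onto the three sensors

light = gauss(500, 80)    # an illuminant spectrum
surface = gauss(620, 30)  # a surface reflectance spectrum

# Physically: multiply per wavelength, THEN project down to three numbers.
rgb_spectral = to_rgb(light * surface)
# Renderer shortcut: project first, then multiply RGB componentwise.
rgb_shortcut = to_rgb(light) * to_rgb(surface)

print(rgb_spectral / rgb_spectral.sum())  # the two normalized colors
print(rgb_shortcut / rgb_shortcut.sum())  # generally disagree
```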
"The phenomenon of colors depends partly on the physical world. We discuss the colors of soap films and so on as being produced by interference. But also, of course, it depends on the eye, or what happens behind the eye, in the brain. Physics characterizes the light that enters the eye, but after that, our sensations are the result of photochemical-neural processes and psychological responses.
"There are many interesting phenomena associated with vision which involve a mixture of physical phenomena and physiological processes, and the full appreciation of natural phenomena, as we see them, must go beyond physics in the usual sense. We make no apologies for making these excursions into other fields, because the separation of fields, as we have emphasized, is merely a human convenience, and an unnatural thing. Nature is not interested in our separations, and many of the interesting phenomena bridge the gaps between fields."
Walking outside and discovering how the world looks is exactly how to improve your techniques as a graphics programmer.
I work in film vfx, and if you think we don't compare our rendered images to real photographs and video constantly, then you don't really know how modern graphics are produced. We shoot reference for everything, and use things like gonioreflectometers to sample real-world BSDFs. Yes, lighting models are always necessarily simplified from the real-world physics. But the thing is, 99% of the time that doesn't matter, because most common types of surfaces can be reproduced accurately enough to fool the majority of people. I would wager that most of the CG you see these days on TV or in movies, you have no idea was CG... it's only the bad stuff (or the obviously impossible) that stands out.
I'm actually a bit confused about what you're arguing here... all of the problems you're mentioning have been well understood by graphics researchers for the past couple of decades.
The main point was that film isn't interested in exact photorealism. As you said, it doesn't matter, because the simplified models are good enough. Therefore it's unlikely that the film industry will be the first to produce a fully computer-generated video that will be indistinguishable from a camcorder capture.
The reason most of the CG you see in TV or movies looks very good is because they take place within real video. We're not looking at a completely CG scene -- it's mixed with video from the real world. And that's a perfectly valid technique, but my comment was talking about 100% CG.
A secondary point re: the film industry is that artists must necessarily retain control of the art pipeline in order to create scenes that advance the plot. That requires the art pipeline to be flexible. The more flexible your art pipeline, the more productive your studio is. Yet that flexibility is precisely opposite to realism. Obviously, the more realistic a purely CG scene looks, the less flexibility you get, otherwise it wouldn't appear real; hence the argument that the vfx industry won't be the ones to produce the elusive fully-CG fully-realistic video. (It doesn't make financial sense for them to do so, if nothing else.)
Again, I would wager that you have seen many, many shots on TV and in movies which were already fully, 100% CG. You just didn't notice, because they really were indistinguishable. We've definitely crossed that boundary in many areas already, and the main thing that still stands out is just bad work.
I've been diving into light, color, and color perception over the last couple of weeks as part of a fun work project. I've got a diffraction-grating spectrometer on my desk that measures light in half-nanometer wavelength steps. From the papers I've read and the stuff I've messed with so far, it seems that human perception of colour, on average, is well studied, and has been since the 1930s. We know how light gets into the eye, and how much each wavelength of light will excite the average eye's individual color-specific sensors.
Once the info leaves the receptors, now that's another story...
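(For the well-studied half, a minimal sketch of that receptor-level computation, assuming you have the CIE 1931 color matching functions tabulated at the spectrometer's sample points; the function is illustrative, not from any particular library.)

```python
import numpy as np

def spectrum_to_xyz(wavelengths, power, xbar, ybar, zbar):
    """Integrate a measured spectrum against the CIE 1931 standard observer
    to get XYZ tristimulus values: how much each wavelength excites the
    average eye's three color sensors, per the 1930s-era data.
    wavelengths: sample points (nm) from the spectrometer.
    power: spectral power at each sample point.
    xbar, ybar, zbar: color matching functions tabulated at the same points."""
    X = np.trapz(power * xbar, wavelengths)
    Y = np.trapz(power * ybar, wavelengths)
    Z = np.trapz(power * zbar, wavelengths)
    return X, Y, Z
```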
Perhaps a way of stating it is that you want to create computer programs which do the same image processing that our brains do. That, I think, is what you're getting at: we don't see "reality"; we see images processed by our brain from signals received by our eyes, mixed with a bunch of learned and instinctual models already in our brain. Accurately modeling the signals entering our eye is not enough. We also need to perform, at least approximately, the same processing that our brain does.
Do a search for photorealistic ray tracers. Photorealism is inevitable, as we are only projecting onto finite displays. Enough samples from an accurate enough model will result in bits that are indistinguishable from a live capture.
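(The "enough samples" part is just Monte Carlo convergence: the estimator's error falls off as 1/sqrt(N), so eventually it lands within one display quantization step. A toy illustration, with a 1-D integrand standing in for incoming radiance.)

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """1-D stand-in for radiance arriving from direction x."""
    return np.sin(np.pi * x) ** 2

exact = 0.5  # integral of sin^2(pi * x) over [0, 1]
for n in (10, 1_000, 100_000):
    estimate = f(rng.random(n)).mean()  # n-sample Monte Carlo estimate
    print(n, abs(estimate - exact))     # error shrinks roughly as 1/sqrt(n)
```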
Light can be multiplied, in a way, through interference.
So you want to store an amplitude and wavelength along with your shapes instead of a color? Go for it. You want to add in psychological factors? Sounds wasteful.
Your type of example seems like it's more useful for machine learning. Collect enough samples, build enough models, run enough tests, and you'll get something accurate-ish.
I thought you were going to go with a different angle on this. If all you care about is photorealism, well, we'll have to agree to disagree that the current state of the art is incapable of that (given enough time).
What I thought your original point was is that 3D rendering ONLY cares about photorealism. Physically based renderers have been greatly influenced by photography (both still and film). Think light probes, camera lenses, etc. Much of the post-processing you see in a scene is also derived from what a camera would see, from focus, to blur, to HDR now, and of course the infamous lens flare!
So I think a physically based renderer which uses the human eye as its camera would be interesting to see more of.
The recommendation to get a camera is a very good one. A camera used for an extended period of time is a very good teacher, and not only of seeing.
To go even further, learn to draw. Use your hands and other senses instead of only thinking. Read Drawing on the Right Side of the Brain by Betty Edwards and The Hand by Frank R. Wilson.
I'm curious what your favorite games are, visually, @sillysaurus3?
A couple of mine are Far Cry 2 and Elite: Dangerous Horizons. I also love the sound in both of these, which to me is a seemingly inseparable element of great screen work.
> I'm curious what your favorite games are, visually, @sillysaurus3?
I've been considering an answer to your question for the past couple of days. As cheesy as it sounds, I suppose my favorite isn't necessarily a game, but a video:
This was what I grew up with, and what helped inspire me to pursue graphics programming. I used to watch it on loop dozens of times, being amazed at how graphics got better with each game.
That was back in the day when you had to download codecs for DivX, and find these videos on Kazaa... Ah, memories. :)
> A couple of mine are Far Cry 2 and Elite: Dangerous Horizons. I also love the sound in both of these, which to me is a seemingly inseparable element of great screen work.
Good choices. Absolutely agreed re: sound. Sound and animation are both vital (but often overlooked) elements of a good story, with all the focus on graphics nowadays.
> And for what? We know that these techniques simply do not produce computer-generated videos that a human will identify as a real-life image.
This is actually testable. Some archviz images are indistinguishable from reality for non-graphics experts.
> Get a camera. Take photos. Compare these photos to the results of the algorithms you write. Iterate on your algorithms until they are producing results that match something that already captures nature, not our beliefs about how we ought to be able to capture nature.
Yes, very important. Art teachers try to reinforce this by saying "seek reference". It's true for graphics as well, and it's possible to be more empirical.
> Some archviz images are indistinguishable from reality for non-graphics experts.
Photos, yes. Not video. It's an important distinction because the path to photorealistic video won't pass through impressive-looking still frames. Visual processing of movement is quite different.
Replace "looks good" with the more generic "gets the right answer" and you'll notice that phrase repeated in any simulation context where computers are not yet powerful enough to work from first principles.
If you want a safe place to look, look at chemistry. They have simulations that vary in complexity by a huge number of orders of magnitude. You'll see those non-natural techniques that "look good" get progressively applied as the simulations get slower and slower. It works really well, but once computers are fast enough, everybody just throws them out the window.
I do not agree with that. When I produce music, of course I feel good about what I did, and of course I enjoy listening to it. The art is what makes me happy. But god, I would have loved to make money with my passion. Some people are passionate about numbers, some about art. In the end it's a product, and if you want to make money with it, that is fine.
I'm not saying that it's wrong to want to make money off of art. I'm happy for those who can.
I'm saying that if money is the dominant motivating factor in producing art, expect to be disappointed. Technology has reduced many costs of artistic production and has led to an oversupply of "content." To the cold, uncaring marketplace, you're effectively being compensated by those good feelings involved in the process of imagining, producing, and sharing art.
Even so, I draw and record music for my own enjoyment, and to collaborate with friends.