As beautiful as this demo is, I think the push for realism has harmed the game industry more than it has moved it forward. One of the few developers in recent memory that has managed to focus on both visual fidelity and gameplay is BioWare with the Mass Effect series; most developers aren't this successful and end up producing beautiful but lifeless, dull games.
Also consider that producing art assets for an engine like this is a huge undertaking and adds a substantial cost to development. Publishers are less likely to take risks with innovative concepts given film-like budgets, increasing the tendency to stick to formulas that they know offer a good ROI. The "hollywoodisation" of the games industry hasn't had much to offer me as a gamer, and increasingly I find more enjoyment with wee indie offerings like Minecraft over the AAA titles.
I think another great exception is the Elder Scrolls series by Bethesda. Every time I see a new physics engine, better AI, or better graphics, I think, 'What can Bethesda do with this?'
I agree with Bonch here: Oblivion is one of the worst examples of this very trend. It was stunningly beautiful (especially if you had a monster machine), but for that it traded animation (crappy, especially the faces), scenario, originality, gameplay (though I'm sure the console targets helped there), …
Apart from the graphics, the shipped game was lifeless and uninteresting. Morrowind was orders of magnitude more original, more interesting and (even though it was set on a half-dead continent) more full of life than Oblivion.
You think Bethesda is an exception? Oblivion was pretty crippled in features and gameplay compared to Morrowind and Daggerfall despite having physics and "realistic" faces. The AI was embarrassing. Bethesda is one of the main culprits in trading gameplay for graphics, as well as latching onto "next-gen" marketing hype.
Indeed, I see Bethesda as a not-so-shining example of the case that squidsoup is making. Morrowind was brilliant - it had great features, great gameplay, a great storyline, etc.
Then Oblivion came along and they switched their development focus (or at least a fair amount of it) onto the graphics. And the result? Poor gameplay with a cop-out 'auto-levelling' system, a 'fast travel' system that amounted to 'click this button to travel anywhere in the game world', etc.
Sure it might have looked pretty and thus sold well, but it was a step backwards in terms of gameplay.
Crysis 1 is another example of this. Amazing graphics, amazing physics. But the storyline was non-existent and at times the gameplay was poor.
Totally agree. Oblivion was 80% hype and pretty graphics.
Beneath the eye candy, Oblivion was an extremely boring game, really. Sure, the world was huge, but it was empty and repetitive. Each cave looked like every other cave[1]. Each dungeon, castle and fort looked exactly the same[1]. Every tomb, crypt or ruin looked exactly the same[1]. Each Oblivion level looked exactly the same[1]. The rest of the world was more or less filler to link these places together. Also, sometimes I found an interesting area only to be disappointed that there was no history, no reason for it being there, nothing. (E.g., if I find a ruin waaaay out in the middle of the mountains, I wonder why it's there, why it's a ruin, what happened to the people, etc. - Oblivion made no attempt at answering any of these questions.)
Besides that, each quest was written as if it were the only quest in the game (i.e., the world may as well be paused until you finish the quest) and, despite the hype about the AI, everything revolves around the main character - if you stop and sit on the side of the road for three days, the world is pretty much paused until you do stuff again (apart from trivial NPC schedules).
Having said that, the world was quite beautiful and some of the quests did interest me (e.g., the assassin quest where you get locked into a house), but overall the game was dull, lifeless and, despite its grand size and scale, empty.
[1] On the inside. Sometimes there would be a really cool-looking (from the outside) structure that got me excited, only to be shown, yet again, the exact same interior I'd seen a hundred times already.
I feel like we need a "library" of freely available hi-res textures and models that can be adapted for a given game. From what I understand, a huge part of the development costs go into the art, and having a huge library to draw from seems like it would mitigate that somewhat.
It's not that simple. There is more to a game artist's job than just creating a generic model and texture. The art has to be in a specific art style, it has to be within a certain budget, and it has to be made in a certain way to work in the game engine. The textures have to be made to work with the shaders the asset will be using and to look good in the game's lighting system. Physics might need to be set up. Destruction states. Lightmaps generated. It might need to be created so that it can be rigged and animated. There are places like cgtextures.com for textures, and there are artists who turn those textures into the art you see in the game. You can't replace the game artist.
Believe me, I'm not trying to replace game artists or belittle their work in any way. I'm just musing about whether or not there is a way to streamline an artist's job, to make it easier for them to create content. Which, when phrased that way, seems like it is more an issue with the available tools than with the library of previous works.
Most companies have lots of in-house tools and scripts they have written to help artists do their job. A company like Bungie will have people dedicated to working on tools for artists. The positions are called tech artist or tools programmer. Some companies have much better tools than others.
Can anyone in the game industry comment on this? What types of problems are game companies paying to solve? I have seen at least one company that sells procedurally generated trees and am curious how good an idea that is. From a customer-oriented point of view, what problems do game companies have today?
I guess you're talking about SpeedTree. We used it in some of our games, but not in our latest.
One of the problems with using middleware, especially middleware that generates content, is how to integrate it with the rest of the pipeline.
If our pipeline sees it as just another model/texture like the rest of the provided content, then no problem - e.g. an artist uses the tool to generate the geometry. But if we have to put some control over it and integrate it deeper into the rendering system, then other problems arise:
- Speed of rendering
- Memory usage
- Collision support
- Can it reuse existing shader techniques, or are new ones required?
- others
And since you mentioned procedural generation - that brings further complications:
For example, if this is a multiplayer game you would have to synchronize the tree generation so it looks exactly the same for each player and has the same collision box or collision model (usually a low-poly bounding convex hull, or a set of them). This means you would have to communicate/synchronize this with the clients.
Then some animators might have adjusted a certain cut-scene to look very good, but the TREE has changed and now it looks weird.
For an open-world RPG or strategy game this might not be a problem - more variation there is all for the better.
But all in all, if the price is right, and artists and coders agree - why not?
>For example, if this is a multiplayer game you would have to synchronize the tree generation so it looks exactly the same for each player and has the same collision box or collision model (usually a low-poly bounding convex hull, or a set of them). This means you would have to communicate/synchronize this with the clients.
They just have to use identical seed values for their random number generators.
Look at Dwarf Fortress: a few worldgen parameters and a single number are all you need to generate a fantasy world complete with procedurally generated geography, a population with traceable lineages, governments and other entities, and a history, including artifacts and artwork that portray said history.
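A minimal sketch of that point in Python (the toy generate_tree function and its branch parameters are made up for illustration): give every client the same seed and the same generator, and the "random" tree comes out identical everywhere, so only the seed has to travel over the network.

```python
# A toy, deterministic "tree generator": same seed in, same tree out.
import random

def generate_tree(seed):
    rng = random.Random(seed)          # private generator, isolated from global state
    branches = []
    for depth in range(3):             # three levels of branching
        for _ in range(rng.randint(2, 4)):
            length = rng.uniform(0.5, 2.0)
            angle = rng.uniform(-45.0, 45.0)
            branches.append((depth, round(length, 3), round(angle, 1)))
    return branches

# The server picks one seed per tree and sends only that integer to clients;
# every client reproduces the identical geometry (and collision proxy) locally.
seed = 1337
assert generate_tree(seed) == generate_tree(seed)
print(generate_tree(seed)[:3])
```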
I'm not in the industry at all, but I've followed it for a long time and done a good deal of random game dev. The biggest pain point is how much it costs to produce a large world these days. As it stands, this is not a technical problem, but a business one: artists are expensive and art for modern games is insanely detailed.
Procedural generation is, IMO, the way to go here, but it's an unsolved problem. I've been thinking about this for a long time and I've had two specific ideas in mind for how to ease the cost of content development. Please, steal these!
Idea one is for the creation of individual assets. You start with what is, in effect, a block of clay. You (programmatically) cut away at it, apply materials to it (which could, in turn, do things like cut a wood grain into it), add on to it, etc. So let's say you want to build a sign generator for shops in your town. You'd start with a block of clay roughly the size of your sign, cut it into the (2d) shape you want, emboss the logo into it, then apply a wood material to the sign, and paint the raised areas. This allows you to quickly create unique assets in your world.
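A minimal sketch of how idea one might look as code, assuming the asset is represented as a recipe of operations that a separate tool would later evaluate into actual geometry; the ClayBlock class and its operation names are hypothetical, not any real engine's API.

```python
# Hypothetical 'block of clay' asset recipe: each call just records an edit;
# an offline tool would evaluate the recipe into a mesh and textures.
from dataclasses import dataclass, field

@dataclass
class ClayBlock:
    width: float
    height: float
    depth: float
    ops: list = field(default_factory=list)   # ordered list of edits

    def _op(self, *args):
        self.ops.append(args)
        return self                            # allow chaining

    def cut_outline(self, shape):        return self._op("cut_outline", shape)
    def emboss(self, motif, depth):      return self._op("emboss", motif, depth)
    def apply_material(self, material):  return self._op("material", material)
    def paint_raised(self, color):       return self._op("paint_raised", color)

# A shop-sign generator: vary the motif, material and colour per shop and you
# get many unique-looking signs from one short recipe.
def make_sign(motif, material="oak_woodgrain", color="gold"):
    return (ClayBlock(1.2, 0.6, 0.05)
            .cut_outline("rounded_rectangle")
            .emboss(motif, depth=0.01)
            .apply_material(material)
            .paint_raised(color))

print(make_sign("boars_head").ops)
```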
Idea two is for the high-level creation of a world. You have an editor where you can create a landscape. You start by setting the size of the area, drag points up and down to affect terrain, etc -- all stuff that's been done before. The key, though, is in world brushes. For instance, you select a 'forest' brush. This brush, when painted onto an area, creates trees, rocks, bushes, etc as appropriate based on elevation, angle of the ground, etc. So you paint part of your area with a forest brush, then go in and paint in a river, paint in paths, etc. Once you've created the high level look, you can go in and add features like signs to cities, unique quest/story-related assets, etc. This allows you to create large areas with completely unique geometry very quickly and cheaply, without sacrificing quality; after all, you're starting off with unique objects and then adding specific touches.
The latter idea has been implemented to a certain extent in the past, but it's been entirely via instanced geometry. The problem there? Everything looks the same. Procedural generation lets you get a unique world, then you can go in and actually make it feel real.
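A minimal sketch of the 'forest brush' from idea two, assuming a plain heightmap terrain; the density, slope and elevation rules (and the species choice) are invented for illustration, not taken from any real tool.

```python
import math, random

def slope_at(heightmap, x, y):
    # Crude finite-difference slope in height units per cell.
    h = heightmap
    dx = h[y][min(x + 1, len(h[0]) - 1)] - h[y][max(x - 1, 0)]
    dy = h[min(y + 1, len(h) - 1)][x] - h[max(y - 1, 0)][x]
    return math.hypot(dx, dy) / 2.0

def paint_forest(heightmap, region, density=0.3, max_slope=0.5,
                 elevation_range=(10.0, 80.0), seed=0):
    """Scatter trees over `region` (a list of (x, y) cells) based on terrain."""
    rng = random.Random(seed)
    placed = []
    for (x, y) in region:
        if rng.random() > density:
            continue                                  # thin out to the brush density
        elev = heightmap[y][x]
        if not (elevation_range[0] <= elev <= elevation_range[1]):
            continue                                  # too low/high for this biome
        if slope_at(heightmap, x, y) > max_slope:
            continue                                  # too steep; a 'rock' brush could take over
        species = "pine" if elev > 50 else "oak"
        placed.append((x, y, species, rng.uniform(0.8, 1.2)))  # position, type, scale
    return placed

# Example: a gently sloping 32x32 patch painted with the forest brush.
hm = [[20.0 + 0.1 * x for x in range(32)] for _ in range(32)]
trees = paint_forest(hm, [(x, y) for x in range(32) for y in range(32)])
print(len(trees), "trees placed")
```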
You bring up good points, but you forget one thing - artists love control. Take that away from them and they cry like babies who lost their candy.
Procedural art is really cool if you can think about your asset the way a programmer does - but you need to remember that artists do not think like us. If you are willing to replace your artists with tech artists, though, that's a different story (note: those people are rare, and normally stay in one field, programming shaders).
"The problem there? Everything looks the same." -- Yes, that's exactly the desired consequence by artists, and as mentioned, will cry if their flexibility is revoked by tools they are not familiar with (in house model/texture generators).
While you as a programmer think of a texture as "noise and color", artists think about it in terms of brush strokes, materials and emotions.
Procedural generation doesn't automatically mean that the artist has no control. There will still be inputs guiding the generated content. Artists will still have the final say as to what will get into the game.
You have to make the GUI artist-friendly, though; inputting some numbers and hitting a button won't do. The tree-generation company (SpeedTree) is actually a good example: their program is very easy to work with, even for artists, and you can generate a wide range of realistic-looking trees.
Also, with procedural generation you can achieve a more varied look. For example: if artists have to design every tree, you get a few specimens that you place everywhere. With automatic generation, every tree can be different.
"Procedural generation doesn't automatically mean that the artist has no control" - It means they have less control, and they always bitch and moan about that.
Sure, some people will always bitch and moan about change. But look at it positively: it allows them to do more impressive things in less time. If they can make a good-looking forest in a day instead of three months, they can do more interesting things than hand-designing trees.
The second idea is also already being done by software like Terragen from Planetside Software: http://www.planetside.co.uk/ I don't know how widely used this is; I have the impression lots of game companies build similar tools in-house, tailored to their particular games.
I'm peripherally in the industry, but even I can say middleware is huge. Speedtree, as you mentioned, shows how niche the suites can be. You also have FMV players (especially on mobile) and of course physics engines are huge. Look for Havok and you'll see it powers most AAA games. I don't know specific pain points you can ease, but if you can generalize it, game companies will probably pay for it.
Also, middleware has to be a node on the outside of the graph. It has to solve substantially all of a problem, not just most of it.
So a fully functional physics engine, a complete procedural tree generator, a self-contained front-end system, a reasonably complete animation package, that's fine, that's great stuff.
Sixty percent of a solution plus an API is not okay.
A complete solution to a problem (no matter how niche) with the smallest integration surface is key.
potatolicious mentioned animation systems (http://news.ycombinator.com/item?id=2303453). Is this an area that is just not very important (won't buy), or one that is simply low on the list of to-do items? SpeedTree raises the question: what about speedFOOBAR for buildings, cities, caves, worlds, fish - or how about speedanimal (generating the animal plus its animation)? Is the big place to find a niche in art, or are there better places to invest, such as physics engines?
What about audio? Is that an area that is uncharted or an area that is low on the list of items?
If you want to know what sorts of libraries the game industry wants to license, take a look at RAD Game Tools; they are the masters of this stuff. Some of the stuff they are currently focused on is large data-set compression/decompression (Charles Bloom), and I would guess some sort of tool/library to generically handle sparse virtual texture rendering (Sean Barrett), which is similar to id's megatexture technology... though the SVT guess is more speculation on my part than public knowledge.
In any case, this market is probably not a great one to get into unless you are a deep subject matter expert in one of these areas (or you want to become one and have a lot of time to spend doing so), the bar is set pretty high already and few game development studios are willing to put any effort into exploring solutions that aren't either well-proven or from people they already know and trust (the game industry is still rather incestuous).
Spot on, I think; I would add that as well as being a subject matter expert, you will also need to be intimately familiar with the hardware architectures of the Xbox 360 and PS3, since 'optimized' solutions for these platforms will need to be one of your selling points.
The problem is how to handle worst-case scenarios. With pre-rendered movies that might or might not be a problem, and even if it is, it's a production problem (e.g. you can't predict exactly how many days you'll spend rendering your movie).
But for a game - this means dropping the frame rate, making it visually unappealing.
That's why ray-casting will never catch on for real-time games (Unreal most definitely is not doing any ray-casting; I'm just giving it as an extreme example).
The other problem is content creation. Current games are 7-15 GB of compressed data - half of it textures, the rest models, animations, etc. - and all of that has to fit in memory (and be loaded into memory).
Even with the fastest drives, you end up spending a lot of time loading (or streaming).
Then, if that thing looks so real, you start feeling that something ain't right if your gun can't really destroy every piece of a building... And later, allowing that in the engine makes it worse, as not much pre-processed data can be reused (static lighting, BSP geometry, etc.). Or you do it (somehow) in real time.
Also, this complicates the game "AI" - there is no real AI in an FPS shooter; it's simply too goddamn hard. Just think about the cover system, and how laughable it would be if you had destructible surfaces and could see the "AI" guy trying to hide behind what's left of a wall, thinking it was a fine cover point.
Less realism, more constrained world, and better gameplay are the key components to good gaming... Not fancy graphics all the way :)
> That's why ray-casting will never catch on for real-time games (Unreal most definitely is not doing any ray-casting; I'm just giving it as an extreme example).
I'm not convinced that you are correct here. Although ray-tracing has not been a good option in the past due to processor limitations, we're reaching the point at which it may become feasible. Because ray-tracing scales linearly with the number of cores you use, modern CPUs are fairly well equipped to work with it, albeit at sub-optimal framerates. See the following (from 2007) for reference: http://www.q4rt.de/
Outside of this, consider that the only reason vector graphics perform at their current level is the existence of discrete graphics cards optimized for the necessary matrix calculations. If we were to develop discrete cards for ray-tracing, we could potentially see amazing results. For example, in 2009 IBM developed a computer that could run full-scene ray-tracing at 1080p averaging 90 FPS. While I do not know how complex said scene was, I'm optimistic about the future of ray-tracing.
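To illustrate why per-core scaling comes naturally, here is a minimal sketch (not a real ray tracer): every pixel's primary ray is traced independently against a single hard-coded sphere, so the work splits across a process pool with no shared state. The scene and constants are invented purely for demonstration.

```python
from multiprocessing import Pool

WIDTH, HEIGHT = 320, 240
SPHERE = (0.0, 0.0, 3.0, 1.0)            # x, y, z, radius

def trace_pixel(args):
    px, py = args
    # Primary ray from the origin through this pixel on a unit image plane.
    dx = px / WIDTH - 0.5
    dy = py / HEIGHT - 0.5
    dz = 1.0
    # Ray-sphere intersection test via the quadratic discriminant.
    cx, cy, cz, r = SPHERE
    a = dx * dx + dy * dy + dz * dz
    b = -2.0 * (dx * cx + dy * cy + dz * cz)
    c = cx * cx + cy * cy + cz * cz - r * r
    disc = b * b - 4.0 * a * c
    return 255 if disc > 0 else 0        # hit -> white, miss -> black

if __name__ == "__main__":
    pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
    with Pool() as pool:                 # throughput grows with available cores
        image = pool.map(trace_pixel, pixels, chunksize=WIDTH)
    print(sum(1 for p in image if p), "pixels hit the sphere")
```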
Just because it scales with the number of cores does not mean it will keep the fps in a reasonable range given a random camera position and orientation.
Take a look at Intel's raytracing demo: when you go close to the lamps (shiny, bulby, spherical objects with lots of reflections) you get a lot of slowdown (using the same number of CPUs).
That's what I'm talking about. Yes, you can tweak it, but then it loses its purpose.
If every pixel takes too long you get nothing; plus you wasted computing power to find out.
A different solution is to cast a sparser set of rays as things slow down, and then use compressed sensing (which has a runtime complexity independent of scene complexity) to integrate the full scene. Then as the scene gets more complex, you get what (visually) amounts to a heavily compressed jpeg.
I mean no offence but I think the term you are looking for is raytracing not raycasting. Raycasting is kind of a stilted version of raytracing and was used in Wolfenstein3D and those old 3D games from the DOS days.
Content will have to be increasingly procedurally generated. I worry less about the practical size of the content - to be frank, I think games will likely trend towards being rendered in whole or in part remotely, with consoles being dumb clients - than the manpower and budget required to create it.
I lament constriction - I haven't bought any of the latter games in the Call of Duty series owing to how constrained and linear they are. Games like Far Cry or Just Cause 2 are more my style; but these sandbox games tend to gain from more freedom, rather than lose.
Re cover: "all" the AI needs to do is have a desired target location such that the AI's bounding box is fully or partially contained in volumes that are not in the player's line of sight (i.e. in shadow, imagining the player character's head as a light source). I don't think that's an insurmountable problem.
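As a minimal sketch of that heuristic (the 2D grid, ray test and distance scoring are simplified stand-ins, not engine code): prefer any candidate point that the player cannot see, i.e. one sitting in the "shadow" of cover.

```python
def line_of_sight(grid, a, b):
    """True if no solid cell (value 1) lies on the straight line from a to b."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(1, steps):
        x = round(x0 + (x1 - x0) * i / steps)
        y = round(y0 + (y1 - y0) * i / steps)
        if grid[y][x] == 1:
            return False
    return True

def best_cover_point(grid, player, candidates):
    """Pick the nearest candidate the player cannot see."""
    hidden = [c for c in candidates if not line_of_sight(grid, player, c)]
    return min(hidden,
               key=lambda c: abs(c[0] - player[0]) + abs(c[1] - player[1]),
               default=None)

# Tiny map: a wall at x == 3 casts a LOS shadow the AI can duck behind.
grid = [[1 if x == 3 and y in (1, 2, 3) else 0 for x in range(8)] for y in range(6)]
print(best_cover_point(grid, player=(0, 2), candidates=[(5, 2), (1, 2), (6, 4)]))
```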
I think a really strong FPS AI would be a technical achievement similar to automatic car driving; it would have to have a "real eyesight" algorithm, notice motion, understand varied terrain, etc. A basic LOS check is insufficient since there's lighting, smoke, semi-transparent items like fences, etc. in typical FPS environments.
Even considering that the data structures for the world and rendering can be reused for AI purposes, it's still pushing our current hardware to expect to get a whole squad of really detailed, smart AIs and also have a playable action game with modern graphics quality.
It's a lot easier, for the purpose of making a novel, amusing game, to just back off from realism and find other things to try.
> it's still pushing our current hardware to expect to get a whole squad of really detailed, smart AIs and also have a playable action game with modern graphics quality
I don't buy this for a moment. On multicore PCs, graphics horsepower is almost completely separate from the CPU horsepower available for AI. Modern games rarely escape the 15..40% CPU utilization envelope on my machine (i7 920); there are bags and bags of concurrent headroom. Memory usage is barely noticeable (very rarely more than 2GB, as most games are 32-bit, while I have another 10GB twiddling its thumbs). Meanwhile, it's the GPU that normally limits framerate, but only because I'm pushing up anti-aliasing, anisotropic filtering and shadow quality.
I think consoles are the bigger issue: games are no longer designed to take advantage of modern PC hardware. Instead, they're developed with consoles in mind, with graphics parameters that can be turned up on the PC. But gameplay and AI don't get nearly the same tunable love as graphics - and in some ways they shouldn't, as it would change the nature of the game - but it remains that modern PC hardware, outside of straight-line single-threaded performance and the GPU, is not particularly taxed by modern games.
If those spare cycles could be put to uses that actually serve gamers' needs better, it seems that consoles would be disrupted.
I guess a big issue is install base: lots of consoles, and few people with expensive, upmarket PCs. Also, piracy. And a single hardware target. And perhaps console game quality is "good enough" for most gamers. Perhaps having many people to play FPS against is more important than graphics (or AI).
Even worse, as silicon gets cheaper, instead of those PCs becoming affordable, I predict most people will buy even cheaper PCs (or phones+HDMI+kb; or iPads; or a new ARM-based PC form-factor). i.e. PCs getting disrupted, as workstations, minicomputers and mainframes were.
Worst case: it's possible that consoles will never improve from where they are now.
The only hope I see is that someone comes up with something sensational, that everybody wants, that requires that extra processing power. Life-like AI might do it... but we already have real people to play against. Though I think cinematic graphics is probably the best bet.
A fuzzy LOS algorithm may work: have a "vision" value; when the LOS algorithm encounters an "object", add the object's opaqueness to the vision value; if the vision value is below a threshold, continue the ray, otherwise stop. Any objects of interest it encounters are made "fuzzy" based on the vision value. Also increase the vision value with distance, darkness, fog, etc. That way, partially obscured, distant or dark objects have a higher vision value and are therefore treated as fuzzy, meaning the AI has only partial information on them (depending on how fuzzy they are).
Now distant and obscured objects are visible to the AI, but it won't know everything about them and could choose to ignore them, investigate or take some other intelligent action.
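A minimal sketch of that fuzzy-LOS accumulation in Python; the cutoff, opaqueness values and distance/fog penalties are illustrative only.

```python
def fuzzy_visibility(opaqueness_along_ray, distance, fog_density=0.0,
                     darkness=0.0, cutoff=1.0):
    """Accumulate an obscurity ('vision') value along the ray instead of a
    hard visible/blocked test. Returns None when the target is effectively
    invisible, otherwise a fuzziness score: higher = less information."""
    vision = distance * 0.01 + fog_density + darkness   # environment penalties
    for opaqueness in opaqueness_along_ray:              # objects the ray crosses
        vision += opaqueness
        if vision >= cutoff:
            return None        # fully obscured: stop the ray, the AI sees nothing
    return vision

# A target 40m away, behind a chain-link fence (0.2) and light smoke (0.2),
# in slight fog: visible, but fuzzy enough that the AI only gets partial info.
print(fuzzy_visibility([0.2, 0.2], distance=40, fog_density=0.1))   # approx. 0.9
```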
It's not infeasible, obviously you could ray trace a fairly simple scene in real time with modest hardware. The question is, can ray tracing ever produce a better result than rasterization in the same amount of time? The answer at the moment is no, and probably will be for the foreseeable future. Even in high end film work, ray tracing is used sparingly where needed.
Too bad all these beautiful effects will be covered up by the foreshortened perspective of a gun in one corner and a health bar in the other corner.
I kid, but honestly, seeing this doesn't excite me anywhere near as much as a preview of whatever the team that made Mirror's Edge has in store next for current-generation consoles.
DICE made Mirror's Edge, but it was a commercial flop, so we are unlikely to see a sequel anytime soon :( Personally I loved it.
Slightly OT but interesting: from what I hear from people at EA, there was a huge internal political war that year between the people who wanted to build more sequels and licensed franchises, and the people who thought that EA would die if it didn't innovate on its own IP. The latter won and were given a chance to prove their worth - the two main titles to come out of that were Dead Space and Mirror's Edge. From what I hear, management considered this direction a flop (Dead Space, while popular and spawning a sequel, was not the sort of hyper-blockbuster it needed to be, and Mirror's Edge was an unquestionable flop) and now EA is culturally back to the sequel-mill mentality.
A sad opportunity that didn't pan out :( There are precious few new IPs being worked on at EA right now.
A lot of the presentation techniques from DICE's upcoming Battlefield 3 borrow very strongly from Mirror's Edge. I was watching a shakeycam video of B3 from last week's GDC and the influence was very apparent.
"[...] a lot of the physicality of Battlefield 3 comes from the thinking behind Mirror's Edge. So the fact that your hands are a part of the world—it's not just a gun on a stick—it's actually a character that moves around. You can see your feet, you can see your hands, you can touch stuff, you can interact with the world. A lot of thinking comes from Mirror's Edge and that's what you want."
The economics of the games industry mean that you can't turn a profit on AAA games without sequels. The new IP push was about reducing EA's dependence on sports titles and externally-licensed IP. (Also, to a degree, an attempt to reinvigorate EA's corporate culture) It was always in the game plan that the successful new IP games would receive plenty of sequels.
I wouldn't say it was a total failure. Mass Effect and Dragon Age are pretty good. =)
This is very true. The first title establishes a buzz about the game, the second (and third, etc.) game often make more money because the brand has been established and demand created. There are a large number of people who don't want to spend the whole $60 when a new IP is released and will instead wait for it to be discounted, or buy it used.
EA circa 2003/2004 was pumping out sequels yearly for titles that should have long been put out to pasture. This has changed quite a bit in the last 5 or 6 years with IPs like Skate, Mass Effect, Dragon Age, Battlefield (and Bad Company), and Dead Space, where the sequels aren't rushed out the door without much thought, not to mention other new projects like Dante's Inferno and Mirror's Edge.
You kid--but it's the sad reality of most modern games (or at least the ones that generate the most press). Rather than just putting a new "pretty face" on the same old shooter concept that's been the norm for the past 15 or so years, I'd like to see game companies get back to innovative gameplay concepts. Interestingly, I see far more of that innovation coming from the indie publishers who don't do anything in the realm of "crazy 3d graphics" than I see coming from studios churning out "DOOM clone v 29345.2"
Yeah, unfortunately as you ramp up on art assets to get to the level of "crazy 3d graphics", the people holding the money tend to want more and more of a sure thing. 'DOOM clone v 29345.2' will inevitably sell boatloads. 'Toasty the toaster & the raspberry jam rebellion', not so much.
Which is why Braid and Minecraft did so well, even though they were largely programmed by single individuals instead of large teams. They took risks that the big studios wouldn't take.
True, but for every Braid, there were how many hundreds or thousands of failures? You can't blame the guys writing the huge cheques these days for expecting a likely success.
That being said, if they were smart, they'd be setting up some skunkworks projects on relatively small budgets....which I'm not aware of.
Did you know that Mirror's Edge is built using the current version of Unreal Engine? Tech demos are built to showcase as many features as possible, in a very limited time. In order to show shadows/lighting they are often dark.
I get the impression (albeit as somebody with nothing to do with the industry) that studios aren't very excited about another generation of hardware, because it already takes a huge amount of resources to make a current-generation blockbuster and profit is not always guaranteed. The demo is completely amazing and I loved it, but if it "took about three months for 12 programmers and artists to build" this 2-minute scene, surely this is beyond the limit of what a studio will undertake for a full-length production? Or am I underestimating the horsepower of studios today?
It's safe to say that there's little appetite for a new console generation right now.
As in the movie industry, the middle ground in the games industry has been disappearing. Your best shot at turning a profit is either to make small bets on mobile/indie games, or to go full-bore for a AAA title with a budget in the double- or triple-digit millions. Indies don't need another generation of consoles; AAA developers don't want to rebuild all the tech they created for the 360/PS3.
The assumption at the start of the current generation was that if you invested in your tech fairly early on, you'd be able to get a trilogy out of it before the subsequent generation of consoles made the tech obsolete. (This is why so many 360/PS3 IPs were structured as trilogies) Now that it's obvious the current consoles aren't going away anytime soon, developers are looking to amortize their tech over a few more titles before the hardware changes again.
I think the best way to make sense of that stat is to compare how long previous tech demos took to produce. Anyone got the data? (like the Rage demo perhaps)
Not to put too fine a "trolling point" on it--but after playing Final Fantasy XIII, I'd like game developers to take a step back in their graphics aspirations so they can continue to deliver quality games. It's been speculated (and supposedly admitted to--though being at work I don't have time to dig up a link) that the incredibly linear world of FFXIII was a direct result of being overambitious with the graphics. There just wasn't time to build a more "complete" world to the graphical standard they set.
I agree with much else of what is said here about it being a "perfect conditions" demo that doesn't have to deal with any unknowns--and I also echo that it looks exceedingly cool. All that said, great graphics does not a great game make!
Final Fantasy 13's failings were mostly due to a horribly dysfunctional studio environment and a broken development process. They released a surprisingly honest postmortem of the project in a recent issue of Game Developer (you can find a summary of it at http://www.gamasutra.com/view/news/30640/Exclusive_Behind_Th... ), and 'overambitious with the graphics' pales in comparison to the other things that went wrong. My favorite tidbit: They went through a large portion of the project with the game not in a playable state until they realized they needed to build a demo, and only then did they finally get the game playable and start testing it.
Perhaps one might say that developers are aiming too high in every respect (including graphics), and that's leading to flawed projects. But in this specific case, I doubt the game would have turned out good even if the graphics weren't given the effort they got. In many cases it's possible for a studio to easily scale up development when it comes to graphics, because with strong enough art direction, you can have 50 or 100 artists working on assets for different parts of a game and have a reasonable chance of tying all that art together at the end. Unfortunately, engineering teams don't scale nearly that well...
Thanks for the link, interesting read. Reminds me of some similar comments made by Warren Spector after Deus Ex - that it was a big wakeup call to them once they had a single playable level, because you could suddenly see that some aspects of the game didn't work as they were.
Possibly this suggests pg's minimum viable product + iteration strategy as being just as applicable to game development too, although I imagine the "minimum viable product" for FFXIII is a hugely significant amount of work in itself.
For FF you have a number of different game "engines". The fighting engine, for example, can be a standalone component; the same goes for running around in the world / on the track. There is always a minimum viable product.
On the other hand, you have studios like Bethesda - who build great games with incredible gameplay... but graphics performance so sluggish that it really takes away from the experience, not to mention buggy and crashy as hell.
There's room on both sides to improve. IMHO Valve is the best at this - their games run on an incredible range of hardware and look good on just about anything. Their engine is also rock solid to boot.
My point was only that there are many ways to fail when achieving "graphical perfection" is included among your goals for a game. Performance sucks, the world is shrunken, gameplay and story suffer; these are just examples. As always, "something's gotta give" and when you decide it can't be graphics, things that are far more core to the game experience end up on the chopping block.
Realtime graphics are finally blurring into stylized film. I expect in another ten years or so we will, for all intents and purposes, have the horsepower to render photorealistic scenes. The bigger problem may be finding ways to create this content without insane amounts of work. Cloth mechanics, particle/hair systems, realtime physics, constrained IK/semirigid body solvers, fractal elaboration, fluid dynamics, procedural character/animation generation... all of these problems are going to be really interesting as we start hitting the limits of what humans can imagine and express to a computer.
It seems like the more horsepower these consoles/video cards have, the more opportunity there is to create tools that generate part of the graphical experience - i.e. a toolset that handles creating realtime dynamic forests or cities or hair, etc.
Yes, the skin rendering was very nice, and the lighting was great. Which brought into sharp relief how little progress has been made on procedural character animation. When the welder-guy walked across the roof and stopped on the edge, you could see a painfully clear walking loop and outro animation back to the standing position -- very familiar from the earlier Unreal engines (except for the residual swinging of the arms). It is jarring to see such mechanical movements in a demo where the quality of the graphics is so realistic. Epic should use Euphoria or some other engine for procedural body movements. The act of walking to the ledge of a roof to inspect a fight below shouldn't look like any other kind of walk.
Agreed. There's a distinct lack of R&D in what is IMHO more important than graphical fidelity - conveying emotion and motion in games. Valve made waves with its (honestly somewhat primitive) facial animation system, and not much has been done with it until now (LA Noire has done something really cool with it and is coming out soonish). It's amazing how much effort we'd spend crafting the perfect artillery shell explosion while your CO yelling orders at you moves like a mannequin.
We've made a lot of gameplay progress into action-RPGs (Mass Effect and the like), and I for one would like to play a game where facial expressions and body language actually mean something (e.g., the character is lying, but instead of smacking you over the head with it, the game can be subtle about it).
Admittedly, this is a tech demo which was designed to pack in the maximum amount of variation into the shortest possible time frame. Game development typically reuses the same carefully-crafted assets (particle systems, models, textures, shaders, even level geometry) in as many places as possible.
Yes, this was rendered with the engine, but with fixed camera angles, people, etc. you can do lots of tricks that you can't do when doing interactive stuff... ahem, GAMES:
- You can cull your geometry offline (you know where your camera goes in and out; see the sketch after this list)
- You can prefetch certain calculations that are to come soon (you know what's gonna happen)
- You can certainly implement a correct motion blur filter, as you know where you are moving
- Probably no physics - they might as well pre-record it.
- In fact you can capture where each vertex, color, texture channel moves, and just replay it.
- And probably many more.
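As a minimal sketch of the first point (offline culling), assuming a toy scene: because the cutscene camera path is fixed, the per-frame visible set can be baked ahead of time and simply replayed, with no culling work at runtime. All names and the visibility test here are illustrative, not from any real engine.

```python
def bake_visible_sets(camera_path, objects, is_visible):
    """Offline step: for each frame of the fixed camera path, record which
    objects survive culling."""
    return [[obj for obj in objects if is_visible(camera, obj)]
            for camera in camera_path]

def play_cutscene(baked_sets, draw):
    """Runtime step: no culling at all, just replay the precomputed lists."""
    for frame, visible in enumerate(baked_sets):
        for obj in visible:
            draw(frame, obj)

# Toy example: a camera sliding along x only ever "sees" nearby objects.
camera_path = [(x, 0) for x in range(5)]
objects = [("crate", 1), ("tower", 3), ("distant_hill", 40)]
near = lambda cam, obj: abs(obj[1] - cam[0]) <= 2
baked = bake_visible_sets(camera_path, objects, near)
play_cutscene(baked, draw=lambda f, o: print(f"frame {f}: draw {o[0]}"))
```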
I'm not saying it's not cool, and I love good cutscenes in games (Metal Gear Solid), but people seem to think that CUTSCENE-RENDERED is the same as GAME-RENDERED. No, it's not.
Wow, I could actually read the emotions on the guy's face, and it didn't feel too uncanny - to me, at least.
But calling this real-time is, I think, stretching things a bit, even if it's using the game engine and not pre-rendered. The demo and assets were probably polished and optimized for that machine, angle, lighting and scenario. And it was heavily scripted. I think an engine flexible enough for more general gameplay - with more calculations, interactions, assets and, erm, unmasked NPCs - would be a lot less smooth and certainly less realistic-looking. This seems like one of those theoretical optimums with requirements that just don't line up for most practical purposes.
Not trying to downplay how cool this is though. It will still be and is a visually and computationally amazing feat.
I remember seeing pictures around 2004/2005 of what new games would supposedly look like on the old 'next gen' (read current gen) consoles. The actual games were nowhere near the hype (games like Madden and FIFA). I seriously doubt that the next 'next gen' consoles will have graphics like this for the reasons you outline. I'd love to be wrong though. You could easily imagine this technology be used in movies - I thought Jeff Bridges' digitally altered younger version in Tron Legacy had such a case of the "uncanny valley" going on that it detracted from every scene he was in.
Why is everybody complaining about how long it took for Epic to create this demo? This is a next generation engine so the team directly working on the demo was probably working concurrently with development. They probably encountered multiple bugs with the engine every day.
This engine isn't meant for release today, it's meant for release in a few years. Any pronouncements you make about how this engine is too X for today's Y are going to be invalid.
It's going to be a while before there's another console generation. (In a sense, the Kinect and PS Move were an attempt to refresh the consoles without changing the base hardware) I haven't even heard rumors of new console development yet, so I'd guess they're at least 2-3 years out.
Well when did we first start hearing news about the PSes 1-3, relative to their release dates? The PS3 was announced on May 16, 2005 (wow, that's so long ago!) and was released in Japan on November 11, 2006. That means that if a PS4 will be coming out on schedule, we should be hearing news very soon.
Wasn't the PS2 also a '10 year console'? Perhaps there was no official announcement that it was a 10 year console, but it technically nearly was/is! (The PS2 was released in 2000 and was still going on in 2009!)
IIRC, Sony was banking on the PS3 being a 10+ year console. This could all change with the discovery of the PS3's private key however. Of course if that's the case, the PS4 might just be a PS3.5 with a new private key.
What that means is that they'll keep selling the PS3 for 10 years, just like they sold the PSone and PS2 for 10 years. It has nothing to do with the PS4 schedule.
I've long been a supporter of the idea that gameplay comes before graphics. Though it's certainly neat to see a demo looking so dazzling, it really doesn't say all that much to me about the future of the games industry. It's long been clear that over time games will continue to look progressively better.
This is why I was actually so happy to see Nintendo abandon the pursuit of ever-prettier graphics, in the hope that gamers would be drawn in by the innovative gameplay ideas that propelled the Wii to the top of the console heap this generation. Though the Wii has fallen somewhat short of my expectations, I am still impressed by Nintendo's decision.
This generation has been the death knell for countless (up-til-now) successful companies. The higher development costs of making prettier and prettier games have meant that a single flop can tank a company. It's exactly because of this that I believe we are seeing the rise of mobile and social games: they are cheap not only for the consumer to pick up, but also for the developer to produce in the first place.
To give a bit of historical context, this is the very same company that put out such classic shareware hits as Jill of the Jungle, Jazz Jackrabbit, and my personal favorite One Must Fall: 2097.
Another person who knows One Must Fall! I played that for months with friends on a 386.
It is easy to grouch about how gameplay rules over shiny effects, but I am quite impressed and uplifted that the company that made those games is still around and on the cutting edge of games.
Why do they always have to do these shots at night? Is it because the processing needed to render far-away objects in the background is just too much for the engine, so they have to compensate by setting these things at night so the background behind the characters is black, or something close to it?
It's actually the other way around. Night shots are harder to pull off well, since the effects of individual light sources are much more pronounced. In this video they are trying to showcase a number of different light sources, reflections and diffusion effects, which require a dark setting to stand out. Outdoor day shots get most of their "character" from a spot source at infinite distance (parallel rays), which is much less computationally demanding.
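A minimal sketch of why that matters per pixel (the lighting model is deliberately crude and the numbers are made up): the sun/moon costs one fixed term, while every local light adds its own attenuation and falloff work to each shaded pixel.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def shade_pixel(position, normal, directional, point_lights):
    # One constant-cost term for the sun/moon...
    brightness = max(0.0, dot(normal, directional))
    # ...plus one distance-attenuated term per local light source.
    for light_pos, intensity in point_lights:
        to_light = sub(light_pos, position)
        dist = math.sqrt(dot(to_light, to_light))
        falloff = intensity / (1.0 + dist * dist)
        brightness += max(0.0, dot(normal, scale(to_light, 1.0 / dist))) * falloff
    return min(brightness, 1.0)

# Day: no point lights to evaluate. Night: a dozen lamps/sparks per pixel.
p, n = (0, 0, 0), (0, 1, 0)
print(shade_pixel(p, n, directional=(0, 1, 0), point_lights=[]))
print(shade_pixel(p, n, directional=(0, 0.05, 0),
                  point_lights=[((i, 2, 1), 5.0) for i in range(12)]))
```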
The problem with these tech demos is exactly that--they're just tech. In-game graphics will never look this good because of non-cinematic camera angles, behind the shoulder viewpoint, and HUD elements mucking it up. It's nice looking, for sure, but I can get that from a Pixar movie.
They probably mean the 6-core i7-9* series, distinguished from "regular old i7" by their Gulftown architecture and 50% higher transistor count. But yeah, I thought that term died a while back.
Sony and MSFT didn't do that well on this generation.
> Collectively, that means that 2011, on the console side, is going to be relatively drab. Microsoft doesn't need a new console, Nintendo doesn't want to compete with the 3DS launch, and Sony can't afford a new console. So it seems like the only points of interest this year will be price cuts--certainly, Sony and Nintendo will have them.
There are no plans for a future generation, aside from Nintendo, which _might_ have a full HD console. (Please correct me if I'm wrong).
> The demo ran on a PC with an Intel Core i9 microprocessor with three Nvidia GeForce 580 GTX graphics cards connected through SLI technology. The demo took about three months for 12 programmers and artists to build.
It's on a huge, hot, expensive PC now, how would the big 3 recoup their investment in such a beast? Wouldn't they be better off trying to sell portable units with cheaper to produce games?
I would be surprised if the next Xbox didn't represent a big leap in rendering performance. Microsoft has a tendency to throw its near-infinite resources to brute-force problems.
I, however, am not sure whether Sony will do the same or learn from the lesson Nintendo taught: that gameplay is more important than rendering performance. The fact that Cell is more or less a dead end doesn't help it much. The chip was too expensive to develop, is too expensive to use, and is devilishly hard to program effectively.
Higher rendering performance is inevitable - just like almost every netbook on the market is a multi-threaded 64-bit beast. I question whether every console maker will emphasize it by the same measure.
On a serious note - take every tech demo with a trunkful of salt. The infamous Unreal 3 demo from years ago still hasn't really delivered on its promise some 6 years later. Tech demo != game.
This may be an unfashionable point to raise, but as the line continues to blur between CG and live action, we need to check the ethics of the content we're producing.
When I play a racing game for several hours, I have to check myself before I get behind the wheel of a real car.
What does it do to the mind of a twelve year old to not just see hundreds of graphic acts of violence, but to control them over and over again?
Too bad about the downvotes. Even though we like to pretend this isn't an issue, this point really needs to be addressed. When graphics become a mirror image of reality, what does it do to our subconscious to be continually acting out violent acts on real-looking people? I know we like to deny it, but violent images do have a desensitizing effect on our subconscious. True-to-life graphics will probably make that effect even stronger. Where do we draw the line? Should there be a line?
I appreciate the comment. I think the current kneejerk reaction against violence in video games is uninformed and ideological, but that doesn't mean we should never consider the consequences of technology as it advances.
Is a game of GTA on the PS3 going to turn a rational adult into a criminal? Of course not.
Is there a difference between the developing brain of a child immersed in a simulation, and a grownup playing a game? I don't think that's unreasonable to ask.
Yes, thanks. To clarify, I'm asking whether the creators of content should have any responsibility for the way in which it will be consumed.
There's no black and white line that I would draw. Parents ARE responsible for their children's upbringing.
Does that relieve the rest of us of all responsibility for the society we create?
I don't believe that playing GTA or watching Saw XVIII is going to make anyone a murderer. That doesn't mean I think we should be racing to the bottom, trying to see how far we can take things.
To be clear, I consider these issues ones that people should solve for themselves, not ones for the government to get involved in. As far as I'm concerned, any censorship or forced labeling of content is a violation of the First Amendment. It's not the government's job to police, but it should be our responsibility to consider.
It's not only unfashionable, it's downright wrong. Studies have shown there is no link between "entertainment" (like comic books, movies, and games) and real-life behavior.
I'd like to see some of these studies you cite. I'd wager you're drawing a far stronger conclusion than the studies warrant. Even a very tiny effect, while inconsequential in an individual, can have an impact on the scale of an entire culture.
I'm not talking about reading a violent comic book, or watching a movie, or even playing one of the current generation of video games. Please re-read what I wrote.
My question is about whether we should consider ethics as technology advances. If we reach the point that we can create realistic simulations, and then put children into those simulations, do we have any responsibility for the content of those simulations?
Please don't be so ideological about the issue. It should never be wrong to question things.
It goes almost without saying that Epic develops great game engines; however, IMHO, Crytek's CryEngine at GDC was more impressive and has done a better job of rendering outdoor environments and foliage than Epic's Unreal Engine. Unreal still holds the top spot in market share, but I think Crytek will make significant gains in the next couple of years.
This has been true for over 5 years (Far Cry came out in 2004...). And yet, CryEngine has basically only been used in other Crytek titles. If destructible trees couldn't sell game engines, there's little new hotness in CryEngine 3 that will. I bet in another 5 years we will be saying pretty much the same thing.
It looks incredible, as you'd expect. I wonder how this will translate to better gameplay though, or will resources be diverted to the 'shiny shiny make it all better' factor?
Finally a use for 3D. But I'd love to see some eye tracking (and some way to determine each eye's focal length) to determine what we're focusing on -- Tron gave me a headache.
But, a virtual blade runner in proper 3d would be a little fantastic.
This is why I've always been a huge fan of Unreal technology. You could definitely call me a fanboy, since I played the original Unreal (the game on which UT was based) avidly for years... but for good reason. Like most males my age, I've played tons of first-person shooters... but I've always felt Unreal was way ahead of its time, not just with graphics but with gameplay as well. The gameplay of the Unreal series has steadily declined since the original, but it's still hard to find a game on its level in terms of how many different ways there are to approach being good at the game.
But back to the graphics, rendering these graphics in realtime with a high framerate is definitely possible... but not presently feasible. The only problem I've noticed with the Unreal series and its graphics is as I just said: it's ahead of its time. Sure, there's hardware out there that can handle it no problem... but the average consumer can't afford it. I have a feeling that even when the next generation of consoles is released, they'll still struggle to handle the Unreal Engine at its full potential. But I guess someone has to raise the bar.
I remember when UT3 came out... I was so stoked. My gaming PC was probably only a year behind the top (affordable) components at the time... and I wasn't about to go spend $500+ on a new video card, RAM, and all that. I figured I'd be able to run it decently enough... but boy was I wrong. I got 30 fps tops (and that was when nothing was going on!), and if you've played an FPS like Unreal competitively, you know that to stand a chance at dodging rockets, sniping people midair and all the other awesome stuff possible in the Unreal series, you need upwards of 60 fps, consistently. So I only played the new game for a couple of weeks before giving up. It's probably for the best too, because I needed to concentrate on my studies haha... but the moral of the story is that the entire game basically died within a few months because only a small handful of people could run it.
Note that these aren't graphics from a next-gen console. It's just a tech demo of what Epic would like next-gen consoles to be able to support.
Personally, I think next-gen consoles are a ways off. Consumers aren't that interested in another console generation. Good visuals are already a solved problem with today's hardware, and people are exploring new, simpler types of gaming on the web and mobile devices.
Basically, the 90s obsession with pushing for better graphics is long gone. It's more about art style now.