Open-Source Graphics Processor (kickstarter.com)
214 points by dossantos on Oct 9, 2013 | 75 comments



As someone with a chip design background:

- prices seem reasonable for doing this as a commercial project

- badly written intro that assumes far too much

- seems to be based on 2000-era technology

- not actually delivering hardware, so very limited practical use

- the community that demands open hardware on principle is a lot smaller than they think, so this probably won't get funded

But we're not used to kickstarting software projects at normal rates; we're more used to founders working for free to prototype something and using Kickstarter to push it into production. This project skips that last (even more expensive) stage.


Agreed, this is both too little and too much in the same Kickstarter.

So there is a pretty nice FPGA + dual Cortex-A9 chip, the Zynq-7020, which is used in the Parallella and the Zedboard[1]. This has a pretty decent amount of FPGA fabric, and it could use an open source frame buffer to go along with the core. One of the hugely annoying things about Xilinx parts is that their design flow is insanely overcomplicated.

That said, the comments here and elsewhere point out another interesting dichotomy. There are people who want a fully open Tegra4, and there are people who just want a frame buffer they can program. The latter is an undergraduate FPGA project, the former is a multi-million dollar team effort.

When the two groups talk about what needs to be done there are heated arguments of 'too much' or 'too little'.

These folks seem to have a design already "done" (in the sense that they have used it for some customers). I'm wondering if a more effective path might be to team up with Xilinx or Altera to put together some IP that can be part of their design suites for free, which would enable non-corporations to implement a decent frame buffer. That would just be a port of what they've got and perhaps some design notes around it. Get Digilent to build a demo board and poof, you're good to go.

[1] www.zedboard.org


I have a Parallella from another Kickstarter, with a Zynq-7020 included. It would be interesting to use the Epiphany-16 co-processor as well to evaluate GPU concepts and code.

[1] http://www.kickstarter.com/projects/adapteva/parallella-a-su...


>- seems to be based on 2000-era technology

Could you elaborate?


The goal is to produce a re-implementation of the #9 Ticket to Ride IV card (https://en.wikipedia.org/wiki/Number_Nine_Visual_Technology) which is a fixed-function graphics card released in 1998. As they say in the first paragraph, "The reason behind this was to provide a binary compatible graphics core for vertical markets: Medical Imaging, Military, Industrial, and Server products."

This is not something that will be remotely competitive with modern GPUs. Their $1,000,000 stretch goal would be to implement a modern shader engine, but given that they aren't offering any actual hardware, I'm not sure how they hope to get to that point.


You've got it. The $200k level gives you a 2D graphics card from 1998. It looks like their original target market was people who want an embedded emulation of this sort of ancient system so they can run DOS, Windows 95, or ancient Linux/X11 on it. Markets in which changing the software is horribly expensive due to compliance costs.

The $400k goal targets Direct3D 8. That appears to date to about 2000: http://www.gamedev.net/page/resources/_/technical/directx-an...

(I'm impressed that article is still up from 2000!)

$1m is not totally unreasonable for the shader-based design at commercial rates. Last time I did a back-of-the-envelope calculation for a shader-capable fully open graphics card delivering working silicon, I came up with two years, $2.5m, and a hiring shortlist.


I missed the 'history' paragraph. I remember the glorious #9 days. But this is so old I'm thinking it would be easier to have virtual driver translation. Even with the translation overhead, a phone GPU could easily outperform it.


I guess he meant that, until the last stretch goal, it's a fixed rendering pipeline GPU.

ps: i.e. fixed function, instead of parallel compute units for vertex/pixel shaders.

ps2: I'd like to see an open hardware RISC movement for GPUs. I wish for a 'simple' parallel array of graphics-oriented floating point primitives, very small and regular, but open enough that people can write compilers and drivers with ease, as opposed to very powerful but obscure GPUs shipped with crippled drivers. Just enough to get fast yet low-power CSS/DOM-like rendering (with small bonuses like path rendering) to lift the visual work off RPi-class SoCs.
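ps3: to make that concrete, here is a rough sketch in plain C (made-up names, nothing to do with this project) of the kind of small, regular per-pixel kernel I have in mind, a premultiplied-alpha source-over blend like a CSS compositor spends most of its time doing:

    #include <stdio.h>

    typedef struct { float r, g, b, a; } px;

    /* Porter-Duff "source over" with premultiplied alpha:
       out = src + dst * (1 - src.a) */
    static px over(px src, px dst) {
        px o;
        o.r = src.r + dst.r * (1.0f - src.a);
        o.g = src.g + dst.g * (1.0f - src.a);
        o.b = src.b + dst.b * (1.0f - src.a);
        o.a = src.a + dst.a * (1.0f - src.a);
        return o;
    }

    int main(void) {
        px dst = {0.2f, 0.2f, 0.2f, 1.0f};  /* opaque grey background      */
        px src = {0.5f, 0.0f, 0.0f, 0.5f};  /* 50% red, premultiplied      */
        px out = over(src, dst);
        printf("%.2f %.2f %.2f %.2f\n", out.r, out.g, out.b, out.a);
        return 0;
    }

An array of tiny FP units whose only job is running kernels like this, behind an open ISA, would cover most of the DOM-drawing case.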


Which struck me as totally backwards. With embedded multipliers available in nearly all levels of FPGAs, I think an OpenCL/HSAIL target would be a more generic solution that could implement many of the hard-coded features in software.


Ha, you summarized what I was trying to express while I was editing. Completely agree.


If you want a large set of general purpose RISC cores, check out Tilera and Netronome.

The Raspberry Pi GPU is actually very powerful, but not in ways that help you with CSS/DOM; I don't think there's been much work done to accelerate layout in 3D accelerators, other than at the very basic level of glyph rendering and image decompression.


Actually the RPi case is what's fueling my desire. I know that the VideoCore part of the SoC is very powerful, but it's encapsulated beneath too many layers (especially considering the goal of the RPi, I would love to have simple access to the whole GPU).



One word: thanks.


If you look at the demo video at the end, you can see that they are using WordPerfect on Win 98 or Win 2k. I thought this was some kind of joke. But apparently not. So, I believe it when fosap says 2000-era technology.


I might be wrong, but it looks like this card (at the 300,000 level) has already been designed and used in production. On the one hand, that gives a lot more confidence that they can deliver. On the other hand, what's the initial 300k for if the project is already done?

The higher tiers look more interesting, but I don't know how commercially interesting they are. The Kickstarter is so dry, I'm assuming they're targeting researchers, enthusiasts, or people who would otherwise spend a lot of time making their own GPU. There's no reason given on the page why I, as someone who is only vaguely knowledgeable in the area, should be interested in helping them. It seems really specific, and I can't tell if those people are out there, or whether they're just asking for money from whoever wanders by.

Finally, there's the licensing. If there is a commercial application for this, they might make a ton of money off a 10K tier for a non-GPL license.


I don't understand the criticism. I've done a lot of work in real time image processing with FPGAs. This work, as I like to put it, takes "cubic hours". It is complex, expensive, and requires real expertise and non-trivial development time. To have someone with twenty years in the field launch a project that will result in open sourcing the relevant technologies is nothing less than fantastic. Yes, it takes as long as the project originator says it does, and probably longer.

This isn't web development. Sorry.

I really don't get the criticism about the lack of a board as part of the KS rewards. If you are doing FPGA development at this level, the cost of fabricating a board is a rounding error when compared to R&D costs.


I think they are approaching the marketing from the wrong direction. Instead of targeting PC/open-source enthusiasts, they should be targeting the growing "maker" segment. Have an FPGA board as a backer reward, and demo it using a microcontroller. That way the limitations stemming from a 15-year-old GPU core do not matter that much, since there aren't really many competitors there.


There are existing projects to squirt out a VGA and talk to a microcontroller. Most have an "interesting" reputation for bugs.

At the really low end a sufficiently hacked up PIC (even the older ones) can bit-bang out NTSC composite B/W video, crazy as that probably sounds. You can compute during the retrace intervals. So that's the $2 market.
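To put rough numbers on that (my back-of-the-envelope arithmetic using standard NTSC figures, assuming a 20 MHz PIC at roughly 5 MIPS):

    #include <stdio.h>

    int main(void) {
        double line_us  = 63.5;       /* NTSC horizontal line period (us)  */
        double blank_us = 10.9;       /* horizontal blanking interval (us) */
        double mips     = 20.0 / 4.0; /* 20 MHz PIC, 4 clocks/instruction  */

        printf("instructions per scanline:    %.0f\n", line_us  * mips);
        printf("instructions during blanking: %.0f\n", blank_us * mips);
        return 0;
    }

Call it ~300 instruction slots per scanline and ~50 during blanking, which is why "compute during the retrace intervals" is the whole game.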

One of the most successful / popular / bug-free uses of the Propeller chip is as a graphics coprocessor outputting analog VGA, and it does quite a good job of that. The electronics for the Prop-VGA connection are some peculiar-value resistors and a VGA jack, little else. So that's the $10 market.

If you want something more advanced than a prop as a graphics copro, you use a $25 rasp-pi.

Much above $25 and you're looking at little embedded PCs/home theater PCs. Look at what the mythtv guys are using for frontends, which over the last decade or so has gone from pretty exotic to boring COTS, for a couple hundred $.

So it's a market that's fuller than you know and has had historical issues.

To some extent you run into problems with "maker"-class processors, such as memory limitations. If you want full monochrome VGA and your Arduino only has 2 KB of RAM or whatever it is (it's probably two orders of magnitude too little, whatever the exact spec is), you're not going to be doing a simple frame buffer. And if you hack a rasp-pi into being your graphics copro, then other than I/O limitations there seems to be no point in making an Arduino the "main" processor if the rasp-pi is doing all the work.
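For the memory point, the arithmetic (mine, not from the article) goes roughly like this:

    #include <stdio.h>

    /* Frame buffer size in bytes for a given resolution and bit depth. */
    static long fb_bytes(long w, long h, long bpp) {
        return w * h * bpp / 8;
    }

    int main(void) {
        printf("640x480 @ 1 bpp:  %ld bytes\n", fb_bytes(640, 480, 1));
        printf("640x480 @ 8 bpp:  %ld bytes\n", fb_bytes(640, 480, 8));
        printf("typical AVR SRAM: 2048 bytes\n");
        return 0;
    }

38,400 bytes for 1-bit 640x480 against 2 KB of SRAM; at 8 bits per pixel it really is two orders of magnitude short.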


The Raspberry Pi is basically a GPU with an ARM in a tiny corner of the die. However, the GPU side of its behaviour is mostly Broadcom trade secrets. This produces outrage from some people who demand a completely open platform. However, I don't think there's enough demand (i.e. money) to produce one; and you may run into IP licensing issues, e.g. if you want to include an MPEG accelerator.


I'm confused, what do you mean "demo it using a microcontroller?"


I think most people will go: great, where can I buy the card... oh. The rewards are just the source code. I don't see how this can reach the first goal.


The thing is, this isn't really interesting as a card to slap in your desktop PC. I think the market is people who do embedded projects and want something more open than an Adreno or Tegra chip. Those guys will either put this on an FPGA for prototyping, or lay out a board for their specific needs and use an ASIC. Either way, the KS itself can't really ship a hardware part to help them.


I agree that end users won't be too happy with this project, but companies might be, if they need a (not too powerful) video core.


If you're a company you can already license plenty of GPU IPs for use in your SoC (in RTL form or otherwise).

If you look at the various tiers of this Kickstarter you can see you'll have to go up to $600k to fund a proper ARM SoC 3D IP with AXI support. And since it's a Kickstarter you have absolutely no way of knowing what the end product will look like.

I can't imagine any company betting that much money on RTL code with absolutely no guarantee of what the end result will look like or whether it will even fit in their SoC.

And the code being open source is actually not an advantage, since it means all other manufacturers can then integrate the same GPU in their designs for free.

If it gets funded and it ends up working correctly I'm sure many companies will be interested to try and use it in their designs but I really doubt they'd fund it.


Yes, this is not going to work without a card as a reward.


The problem is that making an ASIC and going to production will cost much, much, much more than $200k. At that price you pay for one MPW (multi-project wafer) batch.

The only way they could do that is if they knew they could market their GPU to a broader audience and sell many units. The problem is I don't see who could be interested in buying a low-perf stand-alone PCI GPU which may very well end up costing more than a TegraII due to the small number of units.


They could ship an FPGA card, although backers would probably get sticker shock at paying ~$1,000 for a 1998 graphics card.


I wish people wouldn't use the LGPL for "hardware" without a legal opinion from the FSF. I mean, what is "linking" for Verilog code? It does not fit the model. The intentions are fine but the mechanisms need work.


I don't know what the FSF has to say, but I can easily imagine what the LGPL would mean in the context of hardware IPs: if you modify the core of the IP in any way you have to share the code. However, the rest of the hardware that interacts through the documented interface or ports of the IP (that would be the "linked" part) is not subject to the license.

Modern ASICs are built around blocks connected by a system bus, so it's actually quite clear-cut.


As I said, the intentions are clear but the license is inappropriate. Licenses, especially GPL-style ones, need to get the details right. What if you change the interfaces; how does that affect it? What if you change the process? Can you draw the line wherever you like? And that's an LGPL-style license. I have come across people using the GPL; what is the boundary of that system? The LGPL has all sorts of provisions specific to, basically, C libraries.


Is there a comparable license that achieves one of the goals of LGPL, that changes to the library get published? I doubt many open hardware publishers really have relinking in mind.


Not a good one that I know of. See this article for a discussion of some of the issues, e.g. what counts as "distribution": http://arstechnica.com/uncategorized/2007/02/8911/


Based on what little I do know about GPUs, it seems to me that they are aiming for a fixed-function pipeline as a first step, and want to move towards a general shader-based architecture only as a very, very stretched goal. Am I right? If so, what is the reason for that? Couldn't they just jump to a modern architecture, instead of following the evolution of the industry? Excuse me for my ignorance, it's possible that even my assumptions are wrong.


Because they are sitting on an already mostly-complete version covering most of the first two goals.


How will this be different from http://en.wikipedia.org/wiki/Open_Graphics_Project which had some promise and then faded into the sunset?


The Open Graphics Project only did 3D and VGA; this one also does 2D, which is probably more useful for embedded applications.

It was also a more complicated architecture than it needed to be IMHO due to splitting functions between two FPGAs, one for the graphics bit and the other for the bus interface.


Well, rather than showing "promise" I'd say it just shows massive willful ignorance of the situation.


As someone who has done a fair amount of OpenGL programming, I can honestly say that it's becoming a convoluted spec. It's no longer elegant or something that should be aspired to IMHO, YMMV. The principles are generally sound, just not the implementation or API.

I would rather see an FPGA being used as a general purpose DSP with the ability to run shader code in software, perhaps inside a runtime written in Go or Erlang. I've thought about doing a Kickstarter to emulate, say, a 256-core processor with an FPGA. The main problem is that the last time I wrote VHDL was in 1999, and I've become too accustomed to writing mainstream code, which mostly involves putting out fires.

I think a highly parallel multiprocessor like this, used to make a break with the past and explore more scalable approaches like ray tracing, would be good for the world. And by ray tracing I mean "not rasterization". I realize that RT has its faults and that there are many other approaches that do things like soft lighting/shadows and depth of field, but they are difficult to explore now because processors are still effectively single-threaded.


Or you could buy a 72 core processor: http://www.tilera.com/products/processors/TILE-Gx_Family

(it's not as powerful as it sounds; memory bandwidth ends up being a major constraint, and programming all those cores effectively is hard in itself)


Ya I was excited about the Xeon Phi too:

http://www.intel.com/content/www/us/en/processors/xeon/xeon-...

I just don't know if anything from Intel will be cost-effective because it's going to be over-engineered to not compete with existing products.

For example caching is largely useless with something like Go that is sending copies of data around. I don't know if it's possible to use copy-on-write with so many cores.

It's just a hunch, but I think multiprocessing in the future is going to use something like content addressable storage and not worry so much about a complex router or interconnect. Only the most naive algorithms will probably win out: basically chop the screen up into a bunch of 16x16 squares and send one to each processor.
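Roughly this kind of loop, a toy sequential sketch where render_tile is just a stand-in for shipping a tile off to another core:

    #include <stdio.h>

    #define W    1920
    #define H    1080
    #define TILE 16

    /* Stand-in for handing a tile to another core / processor. */
    static void render_tile(int x0, int y0, int x1, int y1) {
        (void)x0; (void)y0; (void)x1; (void)y1;
    }

    int main(void) {
        int tiles = 0;
        for (int ty = 0; ty < H; ty += TILE) {
            for (int tx = 0; tx < W; tx += TILE) {
                int x1 = tx + TILE > W ? W : tx + TILE;   /* clip to screen */
                int y1 = ty + TILE > H ? H : ty + TILE;
                render_tile(tx, ty, x1, y1);
                tiles++;
            }
        }
        printf("%d tiles of %dx%d for a %dx%d screen\n", tiles, TILE, TILE, W, H);
        return 0;
    }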

Also I think it will be really awesome to be free of middleware and be able to run physics or AI directly. I've even thought about trying to write something to emulate a bunch of cores on my computer so I can at least play with the algorithms until affordable hardware arrives.


Intel tried exactly that with Larrabee. Didn't work out.


Interesting idea - rather than supporting OpenGL and Direct3D, I'd like to see a card that abandons them in favor of a better thought-out API (i.e. general purpose calculation on arbitrarily-sized buffers with arbitrary dimensions and an arbitrary number of "channels" (or vector/matrix components)). Regardless, I backed it.


But then it doesn't run any of your existing software and has a tiny market, competing with the various vector processors and FPGAs.


Something like Larrabee?


Very nice, and a much-needed part for truly free and open source devices such as laptops and media centers.

As a side note, however, I have one large complaint about the Kickstarter video. It lacks enthusiasm, and it almost looks like the speaker dreads the camera.


Isn't it the case that, for GPUs, the two biggest limitations are process improvement and driver development? Once I have a GPU shader core, I can copy and paste it until I run out of power, transistors, interconnect, and bus bandwidth. But without the process to drive that equation forward, I'm stuck in the '90s, and without GL (and ideally D3D) drivers, I can run no software.

This seems to have no answer to either that matters to normal people.

If a Linux desktop, phone, or tablet, with at least "integrated graphics" level performance, could be produced as open source, that might actually be interesting. But nothing here looks like that's going to happen.


Am I misunderstanding, or are the lower goals basically just a bounty? $200k to "polish" the code they already have? Sounds to me more like they are asking $200k to open-source code they already have.


That seems to be exactly correct, is that a problem?


I don't know, is it? I thought Kickstarter was supposed to be for funding a project either currently in or soon to be in development, not making a group buy.


I see this as a "get enough money together and we'll OSS it" effort. AFAIR this happened for some game engine years ago, but I can't recall anything specific about that one.


If they are designing something on an FPGA that conforms to OpenGL, they are never going to get the same performance that dedicated ICs get implementing the same specification. What's the point?


Late am I, thinking: do I want to see them sinking? Eternal reinvention of wheels, how dumb it feels. The Milkymist is already there, has DDR2 interfaces too; what more could one want for an open core?

http://en.wikipedia.org/wiki/Milkymist

Case closed.


OK, I'm missing this completely. What problem is this solving?

I can't believe they haven't seen a smartphone recently, so they must know what cheapo hardware is capable of pushing nowadays. Yet this exists.

So what is it that is exciting here but flies high over my head?


Acquire any number of FPGA dev boards with a video output. They're not expensive. Then go to opencores.org and browse freely available cores for seemingly everything ... except 3D graphics cores. There are quite a few implementations of crude VGA framebuffers out there; it's a stereotypical uni-level FPGA class lab exercise (it's not terribly hard). There are some OpenCores projects trying to accelerate graphics, but not as many as you'd think. And there's nothing like the thing in the Kickstarter.

I hope the Kickstarter project is Wishbone-compatible so it can talk to any CPU core I'd like to synth and they're not just making a computer card or whatever, and I hope it ends up license-compatible with OpenCores.

The lack of accelerated graphics is kind of a weird hole in free FPGA tech; everything else you can imagine seems to be available.

So, for example, if I wanted to make an Ethernet-connected, stepper-motor-controlling, FPGA-based CNC machine controller, which is not so far-fetched, every major component is off the shelf at OpenCores except for the software I'd write for the synthesized CPU and ... accelerated graphics, if I want a 3D picture of what the machine is doing / has done / plans to do. All the rest, CPU cores, Ethernet, stepper drivers, PS/2 drivers for keyboard and mouse, all that's off the shelf. It would of course be cheaper to use a COTS desktop PC or a rasp-pi, but the point is that if I wanted to use an FPGA I almost could, except for the lack of 3D graphics.


There is a project [1] on opencores.org that probably should be in the "Video Controller" category; I haven't tried using it yet, though.

[1] http://opencores.org/project,orsoc_graphics_accelerator


The demand from a comparatively small number of people for the hardware to be open.


There is a market for hardware that has gone out of production. Consider a hospital with a million dollars' worth of scanner that was the latest tech in 2000. Today, due to the bathtub curve, some parts are failing (e.g. the GPU in this case), and the original silicon manufacturer has either gone out of business or stopped production. The hospital now has the option of either putting down the money for a new scanner or buying a third-party replacement GPU (a pin- and functionality-compatible drop-in) at a premium (what would have cost $10-20 can be sold for $1k-10k). We do not need to fabricate Y2K-era technology on the latest 16nm/28nm process; you can shop around to find the cheapest available process and use it.


I'd rather see a ray-tracing GPU.


Looks like a really cool idea!

I'm curious how the framerate could be improved.

Is the reason it's kind of laggy at the higher resolutions the max clock speed of the FPGA, or other factors like the number of logic units?
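Some generic back-of-the-envelope numbers (not specific to this design; the 1.3 factor is a rough stand-in for blanking overhead) suggest the pixel clock and scanout bandwidth climb quickly with resolution, so the FPGA's achievable clock and the memory interface tend to bite before raw logic count does:

    #include <stdio.h>

    /* Rough scanout cost for a WxH display at R Hz: pixel clock with ~30%
       blanking overhead, plus memory bandwidth for scanout alone at 4 B/px.
       Rendering traffic to the frame buffer comes on top of this. */
    static void budget(int w, int h, int hz) {
        double pixclk_mhz  = w * (double)h * hz * 1.3 / 1e6;
        double scanout_mbs = w * (double)h * hz * 4.0 / 1e6;
        printf("%4dx%-4d @ %d Hz: ~%3.0f MHz pixel clock, ~%4.0f MB/s scanout\n",
               w, h, hz, pixclk_mhz, scanout_mbs);
    }

    int main(void) {
        budget(640, 480, 60);
        budget(1024, 768, 60);
        budget(1920, 1080, 60);
        return 0;
    }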


Nice move, but some details are missing. Nowhere do they mention a GLSL compiler for the shaders. Are they providing it, or do we need to develop our own?


Only the $1M stretch goal would even support shaders; the basic 3D option is just a fixed pipeline.

And way down at the bottom it says: >Software drivers are a challenge, and we will work on providing some level of drivers, with the hopes that the community takes them up and pushes them to new levels and provides problem reports to us.


It's fixed-function hardware, so no shaders and no GLSL compiler.


If they plan to release it as LGPL, do they mean to dual-license it too? Do they see a future revenue stream in its adoption?


It looks like they already have a commercial version, which they've already sold to various customers.

If I had to guess, it'd be that these guys really want to open source it, but don't have the revenue stream to do so. So the kickstarter is to see if the idealists will put their money where their mouth is (so that these guys can buy food and have a roof over their heads while working on opening up the hardware).


Is there a reason why the demo video looks like the cutting edge in 1993? Maybe I'm missing something...


Because they are demoing it with a benchmark suite from 1999/2000 (you see WinBench 99 and WinBench 2000 in some segments). Presumably because the hardware can't handle more modern stuff. Heck, it seems to be struggling with the demos they are showing.


Thank you. That's the answer I was looking for. I had figured they had written their own demo software to show off the capabilities, but a benchmark makes a lot more sense.

Though now that I know they're using benchmarks from last century (and struggling), I'm even more confused about why this is exciting.


> Though now that I know they're using benchmarks from last century (and struggling), I'm even more confused about why this is exciting.

It's exciting because it's open hardware.


Because the video card is made by a small startup (not much info on their web site; it just mentions two founders) rather than a multi-billion-dollar nVidia/Intel?


What's up with people wanting an open-source version of everything?


It's nice to be able to fix or contribute to every element of anything you use for one.


I guess my question is how does that fit into our current insurance policies and liability laws? For example, with so many things claiming to be going open source, if someone uses a 3D printer to make a screw to fix a car, and an accident occurs, what does it mean for the victim, the insurance company, the auto company and you?

Does open source actually innovate in areas like making cars? Does closed-source, private industry competition generate better innovation? Is there correlation and/or causation between either approach and new innovation? In other words, maybe it's nice to openly discuss ideas, but does the actual implementation matter if it isn't open source? With software, we certainly have really terrible open source projects that people will use just because they're handy (I am looking at you, pycrypto), and in the end we have to build another one. Whereas proprietary software may include a custom "pycrypto"-like library that has had private cryptanalysis done on it, and it is still safe and useful.


this looks bad ass!


it's cool



