They churn out new parts and don't bring in fixes. See all the chips in their lineup that have a USB host controller. Every one of them (they use Synopsys IP) will fail with multiple LS devices through a hub.
We talked to our FAE about this and they have no plans to fix it. The bug has existed for years and the bad IP is being baked into all the new chips still.
Solution? Just use yet another chip for its host controller, and don't use a hub.
Thanks for the heads up. I have a design at fab that uses the H7's OctoSPI, so this concerns me. I steered away from memory-mapped mode because it seemed too good to be true - I wanted to be able to qsort() and put heaps in this extra space.
I suspect ST only ever tested it with their single PSRAM they intend this mode for.
My intent is to use indirect mode and manually poke the peripheral, though DMA will still have to happen.
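To be concrete about what "poking it manually" looks like, something along these lines (a rough sketch against the Cube HAL; the 0xEB quad-read opcode, 24-bit addressing, and 6 dummy cycles are placeholder values for a generic quad PSRAM, not from any particular datasheet):

```c
/* Sketch only: indirect-mode read via the H7 OctoSPI HAL instead of
 * memory-mapped mode. Opcode, dummy cycles, and line widths are
 * placeholders for a generic quad PSRAM - check your part. */
#include "stm32h7xx_hal.h"

extern OSPI_HandleTypeDef hospi1;  /* assumed to be initialised elsewhere (CubeMX) */

HAL_StatusTypeDef psram_read_indirect(uint32_t addr, uint8_t *buf, uint32_t len)
{
    OSPI_RegularCmdTypeDef cmd = {0};

    cmd.OperationType      = HAL_OSPI_OPTYPE_COMMON_CFG;
    cmd.FlashId            = HAL_OSPI_FLASH_ID_1;
    cmd.Instruction        = 0xEB;                        /* assumed quad fast-read opcode */
    cmd.InstructionMode    = HAL_OSPI_INSTRUCTION_1_LINE;
    cmd.InstructionSize    = HAL_OSPI_INSTRUCTION_8_BITS;
    cmd.Address            = addr;
    cmd.AddressMode        = HAL_OSPI_ADDRESS_4_LINES;
    cmd.AddressSize        = HAL_OSPI_ADDRESS_24_BITS;
    cmd.AlternateBytesMode = HAL_OSPI_ALTERNATE_BYTES_NONE;
    cmd.DataMode           = HAL_OSPI_DATA_4_LINES;
    cmd.NbData             = len;
    cmd.DummyCycles        = 6;                           /* assumed - part dependent */
    cmd.DQSMode            = HAL_OSPI_DQS_DISABLE;
    cmd.SIOOMode           = HAL_OSPI_SIOO_INST_EVERY_CMD;

    if (HAL_OSPI_Command(&hospi1, &cmd, HAL_OSPI_TIMEOUT_DEFAULT_VALUE) != HAL_OK)
        return HAL_ERROR;
    return HAL_OSPI_Receive(&hospi1, buf, HAL_OSPI_TIMEOUT_DEFAULT_VALUE);
    /* swap in HAL_OSPI_Receive_DMA() once the DMA path is wired up */
}
```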
Back on the PIC32MX platform there was a similar kind of bug that, as far as I can tell, nobody else has hit: if any interrupt fires while the PMP peripheral is doing a DMA, there is a roughly 1-in-a-million chance that it silently drops 1 byte. I noticed this because all my accesses were 32-bit (4 bytes) and broke horribly once misaligned. The solution is to disable all interrupts while doing DMA.
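In practice the workaround looked roughly like this (a minimal sketch using the XC32 builtins; pmp_dma_transfer() is a hypothetical stand-in for however you kick off and wait on the PMP DMA):

```c
/* Sketch: bracket the PMP DMA with interrupts fully masked so nothing can
 * fire mid-transfer and trigger the dropped-byte bug. */
#include <xc.h>
#include <stddef.h>

extern void pmp_dma_transfer(void *dst, const void *src, size_t len);  /* hypothetical helper */

void pmp_dma_safe(void *dst, const void *src, size_t len)
{
    unsigned int status = __builtin_disable_interrupts(); /* returns prior CP0 Status */

    pmp_dma_transfer(dst, src, len);                      /* start DMA and busy-wait for completion */

    if (status & 1)                                       /* IE bit was set before, so restore it */
        __builtin_enable_interrupts();
}
```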
It is worse: I think they also did not test random access. I suspect their test was to fill the PSRAM linearly and then read it back and verify linearly. Random word accesses in uncached mode also randomly lose writes. I am unable to reproduce it quickly on purpose, only randomly, so I guess it is under 1 in 100 million, which is why it is not in my list above. My workarounds avoid these crashes too, though.
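For reference, the kind of test I mean is roughly this (a sketch, not my actual code; the base address and size are placeholders for wherever the MPU maps the OctoSPI window uncached):

```c
/* Sketch: random word writes with immediate read-back over the uncached
 * PSRAM window. A linear fill-then-verify pass never catches the lost
 * writes; this eventually does, given enough iterations. */
#include <stdint.h>

#define PSRAM_BASE   0x90000000u          /* placeholder: uncached OctoSPI mapping */
#define PSRAM_WORDS  (8u * 1024u * 1024u / 4u)

static uint32_t xorshift32(uint32_t *s)
{
    *s ^= *s << 13; *s ^= *s >> 17; *s ^= *s << 5;
    return *s;
}

int psram_random_access_test(uint32_t iterations)
{
    volatile uint32_t *psram = (volatile uint32_t *)PSRAM_BASE;
    uint32_t seed = 0xC0FFEEu;

    for (uint32_t i = 0; i < iterations; i++) {
        uint32_t idx = xorshift32(&seed) % PSRAM_WORDS;
        uint32_t val = xorshift32(&seed);
        psram[idx] = val;
        if (psram[idx] != val)
            return -1;                    /* lost or corrupted write */
    }
    return 0;
}
```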
This is both true and false. While I work with Intel/Altera, Xilinx is basically the same.
That devboard is using recycled chips 100 percent. Their cost is almost nothing.
The Kintex-7 part in question can probably be bought in volume for around $190. Think 100k EAU (estimated annual usage).
This kind of price break comes with volume and is common with many other kinds of silicon besides FPGAs. Some product lines have more pricing pressure than others. For example, very popular MCUs may not get as wide a price break. Some manufacturers price more fairly to distributors, some allow very large discounts.
That's mostly correct. It is as you say, except that shading and texturing come for free. You may be thinking of the PlayStation, where you do indeed get decreased fillrate when texturing is on.
Now, if you enable 2-cycle mode, the pipeline recycles the pixel value back into the pipeline for a second stage, which is used for 2 texture lookups per pixel and some other blending options. Otherwise, the RDP always outputs 1 pixel per clock at 62.5 MHz (though it is frequently interrupted because of RAM contention). There are faster drawing modes, but they are for drawing rectangles, not triangles. It's been a long time since I've benchmarked the pipeline, though.
You're exactly right that the UMA plus high latency murders it. It really does. Enable the Z-buffer? Now the poor RDP is thrashing read-modify-writes and you only get 8-pixel chunks at a time. Span caching is minimal. Simply using the Z-buffer will torpedo your effective fill rate by 20 to 40 percent. That's why stuff I wrote for it avoided the Z-buffer whenever possible.
The other bandwidth hog was enabling anti-aliasing. AA processing happened in 2 places: first in the triangle drawing pipeline, for interior polygon edges; second, in the VI when the framebuffer gets displayed, which applies smoothing to the exterior polygon edges based on coverage information stored in the pixel's extra bits.
On average, you get roughly a 15 to 20 percent fillrate boost by turning both of those off. If you run only at low-res, it's a bit less, since more of your render time is occupied by triangle setup.
Another example from that video: changing a trig function from a lookup table to an evaluated approximation improved performance because it uses less memory bandwidth.
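For a sense of the trade-off, something like this (an illustrative sketch, not Kaze's actual code) replaces a table walk, and the RAM traffic that goes with it, with a handful of multiplies that stay in registers:

```c
/* Bhaskara I's approximation of sin(x) for x in [0, pi]: a few FPU ops
 * instead of a lookup-table access that competes with the RDP for RDRAM
 * bandwidth. Coefficients are the textbook ones, not tuned for any game. */
static float approx_sin(float x)
{
    const float pi = 3.14159265f;
    float t = x * (pi - x);
    return 16.0f * t / (5.0f * pi * pi - 4.0f * t);
}
```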
Was the zbuffer in main memory? Ooof
What's interesting to me is that even Kaze's optimized stuff is around 8k triangles per frame at 30fps. The "accurate" microcode Nintendo shipped claimed about 100k triangles per second. Was that ever achieved, even in a tech demo?
There were many microcode versions and variants released over the years. IIRC one of the official figures was ~180k tri/sec.
I could draw a ~167,600 tri opaque model with all features (shaded, lit by three directional lights plus an ambient one, textured, Z-buffered, anti-aliased, one cycle), plus some large debug overlays (anti-aliased wireframes for text, 3D axes, Blender-style grid, almost fullscreen transparent planes & 32-vert rings) at 2 FPS/~424 ms per frame at 640x476@32bpp, 3 FPS/~331 ms at 320x240@32bpp, 3 FPS/~309 ms at 320x240@16bpp.
That'd be roughly 400k to 540k tri/sec (167,600 tris / 0.424 s ≈ 395k; / 0.309 s ≈ 542k). Sounds weird, right? But that's extrapolated straight from the CPU counter on real hardware plus eyeballing, so it's hard to argue.
I assume the bottleneck at that point is the RSP processing all the geometry: a lot of the triangles get backface-culled, and because of the sheer density at such a low resolution, comparatively most of them are drawn in no time by the RDP. Or, y'know, the bandwidth. Haven't measured, sorry.
Performance depends on many variables, one of which is how well the asset converter itself can optimise the draw calls. The one I used, a slight variant of objn64, prefers duplicating vertices just so it can fully load the vertex cache in one DMA command (gSPVertex) while also maximising gSP2Triangle commands, IIRC (check the source if curious). But there are no doubt many other ways of efficiently loading and drawing meshes, not to mention all the ways you could batch the scene graph for things more complex than a demo.
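To illustrate what the converter is optimising for, its output is basically static batches like this (illustrative only, with made-up vertex data; gsSPVertex/gsSP2Triangles are the standard F3DEX2 display-list macros):

```c
/* Sketch: one gsSPVertex DMA fills the vertex cache in a single shot, then
 * packed gsSP2Triangles commands index into those cache slots - two
 * triangles per 64-bit command. Vertex data here is made up. */
#include <ultra64.h>

static const Vtx mesh_vtx[32] = {
    { { { -10, -10, 0 }, 0, { 0, 0 }, { 255, 255, 255, 255 } } },
    /* ...the rest of the converter-generated vertices... */
};

Gfx mesh_dl[] = {
    gsSPVertex(mesh_vtx, 32, 0),              /* load 32 verts starting at cache slot 0 */
    gsSP2Triangles(0, 1, 2, 0, 0, 2, 3, 0),   /* two tris per command */
    gsSP2Triangles(4, 5, 6, 0, 4, 6, 7, 0),
    /* ...repeat until the cache is exhausted, then load the next batch... */
    gsSPEndDisplayList(),
};
```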
Anyways, the particular result above was with the low-precision F3DEX2 microcode (gspF3DLX2_Rej_fifo): it doubles the vertex cache size in DMEM from 32 to 64 entries but removes the clipping code, so polygons too close to the camera get trivially rejected. The other side effect with objn64 is that the larger vertex cache massively reduces the memory footprint (far less duplication): it might've shaved something like 1 MB off the 4 MB of compiled data.
Compared to the full precision F3DEX2, my comment said: `~1.25x faster. ~1.4x faster when maxing out the vertex cache.`.
All the microcodes I used have a 16 KB FIFO command buffer held in RDRAM (as opposed to the RSP's DMEM for XBUS microcodes). It goes like this if memory serves right:
1. CPU starts RSP graphics task with a given microcode and display list to interpret from RAM
2. RSP DMAs display list from RAM to DMEM and interprets it
3. RSP generates RDP commands into a FIFO in either RDRAM or DMEM
4. When the output command buffer is full, the RSP waits for the RDP to be ready and then asks it to execute the command buffer
5. The RDP reads the 64-bit commands from either RDRAM or the cross-bus (XBUS), the 128-bit internal bus connecting the RSP and RDP, which avoids RDRAM bus contention.
6. Once the RDP is done, go to step 2/3.
To quote the manual:
> The size of the internal buffer used for passing RDP commands is smaller with the XBUS microcode than with the normal FIFO microcode (around 1 Kbyte). As a result, when large OBJECTS (that take time for RDP graphics processing) are continuously rendered, the internal buffer fills up and the RSP halts until the internal buffer becomes free again. This creates a bottleneck and can also slow RSP calculations. Additionally, audio processing by the RSP cannot proceed in parallel with the RDP's graphics processing. Nevertheless, because I/O to RDRAM is smaller than with FIFO (around 1/2), this might be an effective way to counteract CPU/RDP slowdowns caused by competition on the RDRAM bus. So when using the XBUS microcode, please test a variety of combinations.
I'm glad someone found objn64 useful :) looking back it could've been optimized better but it was Good Enough when I wrote it. I think someone added png texture support at some point. I was going to add CI8 conversion, but never got around to it.
On the subject of XBUS vs FIFO, I trialled both in a demo I wrote with a variety of loads. Benchmarking revealed that over a 3-minute run the two methods came in within about a second of each other. So in all my time messing with them I never found XBUS to help with contention. I'm sure in some specific application it might be a bit better than FIFO.
By the way, I used a 64k FIFO size, which is huge. I don't know if that gave me better results.
Oh, you're the MarshallH? Thanks so much for everything you've done!
I'm just a nobody who wrote DotN64, and contributed a lil' bit to CEN64, PeterLemon's tests, etc.
For objn64, I don't think PNG was patched in. I only fixed a handful of things: a buffer overflow corrupting output, by increasing the tmp_verts line buffer (so you can maximise the scale); making the BMP header fields 4 bytes, since `long` is platform-defined; bumping limits; etc. I didn't bother submitting patches since I thought nobody used it anymore, but I can still do it if anyone cares.
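The `long` issue is the usual BMP-loader portability trap; the fix is just reading the on-disk fields into fixed-width types (illustrative struct below, not the actual objn64 diff):

```c
/* BITMAPINFOHEADER fields are fixed sizes on disk, so use fixed-width types;
 * `long` is 8 bytes on LP64 Linux/macOS, which skews every field after it. */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t biSize;           /* 40 for BITMAPINFOHEADER */
    int32_t  biWidth;
    int32_t  biHeight;
    uint16_t biPlanes;
    uint16_t biBitCount;
    uint32_t biCompression;
    uint32_t biSizeImage;
    int32_t  biXPelsPerMeter;
    int32_t  biYPelsPerMeter;
    uint32_t biClrUsed;
    uint32_t biClrImportant;
} BitmapInfoHeader;
#pragma pack(pop)
```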
Since I didn't have a flashcart to test with for the longest time, I couldn't really profile, but the current microcode setup seems to be more than fine.
Purely out of curiosity, as I now own an SC64, is the 64drive abandonware? I tried reaching out via email a couple of times since my 2018 order (receipt #1532132539), and I still don't know if it's even in the backlog or whether I could update the shipping address. You're also on Discord servers but I didn't want to be pushy.
I don't even mind if it never comes, I'd just like some closure. :p
The other Efinix chips are still sold as "has serdes" yet have not a single mention of it in the datasheets. At first I thought it was because they're still going through silicon qualification, but it's been 18 months and they're still TBD.
Being able to specify a generic part requirement instead of hunting for a specific part is nice sometimes, but any company that makes more than a couple boards already has this covered with a BOM management system.
Adding parts via text is nice and fast, but it also glosses over many aspects of the part. Say I use a Diodes Inc buck regulator. It has a valid input voltage range it will accept. It has multiple ways it can be wired depending on the application: wire it for buck, SEPIC, etc.; PFM on/auto/off, etc. I don't see control over details like that.
Your About Us page is longer and more extensive than your actual product example page.
I see 4-5 very basic designs that I could bang out in Altium in under a day each. Are you selling to people unable to make PCBs? I look for ways to save time because I wear many hats in my job, only 1 of which is doing a board. However, I would not be able to save any time using this tool, because it would produce an inferior result. Additionally, after only 8 months of paying for this product, someone can already afford a full Altium license.
I want to save time on stuff like breaking out and pin swapping IO on a large FPGA. Handle DDR3 routing for me. These are things that actually take time, because you need to understand the device and read through tons of PDFs. However, I think that might also be the most difficult part to add to your product.
Finally, how does it handle physical constraints like non-square board outlines, mounting hole placement, and 3D STEP integration?
Good set of questions, and thanks for the feedback. Currently updating the website to add this stuff.
Comments/answers to questions in order:
1. Yes, finding a single resistor is easy. Part solving is more about making parametric circuits possible (like a buck regulator that can configure itself for SS, output voltage). There you need to calculate values from parameters and then go find a part you can actually buy. Doing that manually is a less magical experience. (A rough sketch of what I mean follows this list.)
3. Our customers are professional EEs doing really complex work (e.g. many 30 GHz SERDES), and our website is drastically out of date. Will update the design on the site soon - this post was some guy submitting our landing page to HN. I'm trying to keep up!
4. Yes, this works already. It was very hard to do on the product side; the design code was relatively simple. DDR3 example here - https://youtu.be/bw4KxhV-d8g
5. Physical constraints are imported or generated - it works quite well to automatically sync electromechanicals. Multiple 3D models including STEP can be added to parts; again, I need to update the website.
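Re: item 1, here is a trivial slice of what "calculate values from parameters, then find a buyable part" means in practice (a generic sketch; the 0.8 V reference and the feedback equation Vout = Vref*(1 + Rtop/Rbot) are typical of many bucks but not tied to any specific part):

```c
/* Sketch: pick the top feedback resistor for a target output voltage, then
 * snap it to a nearby E96 (1%) standard value - the last step is where
 * "a part you can actually buy" comes in. */
#include <math.h>
#include <stdio.h>

static double nearest_e96(double r)
{
    double decade = pow(10.0, floor(log10(r)));      /* e.g. 10k for 31.25k */
    double idx    = round(96.0 * log10(r / decade)); /* E96 values lie at 10^(i/96) */
    double val    = pow(10.0, idx / 96.0) * decade;
    return round(val / decade * 100.0) / 100.0 * decade;  /* keep 3 significant digits */
}

int main(void)
{
    const double vref = 0.8;      /* assumed feedback reference voltage */
    const double vout = 3.3;      /* requested output */
    const double rbot = 10e3;     /* chosen bottom resistor */

    double rtop_ideal = rbot * (vout / vref - 1.0);  /* from Vout = Vref*(1 + Rtop/Rbot) */
    double rtop_e96   = nearest_e96(rtop_ideal);

    printf("ideal Rtop = %.0f ohm, nearest E96 = %.0f ohm\n", rtop_ideal, rtop_e96);
    return 0;
}
```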
TBH, a lot of the details are fairly standardized across vendors, and/or are discoverable through SFDP. Parsing the SFDP tables would take a nontrivial amount of program memory, though; I don't know that I'd want to embed that logic in ROM.
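For anyone curious why it's nontrivial: even just getting to the first parameter table involves something like this (a sketch against JESD216; spi_xfer() is a hypothetical SPI transaction helper, and interpreting the Basic Flash Parameter Table dwords afterwards is where the real code-size cost is):

```c
/* Sketch: issue the Read SFDP command (0x5A + 24-bit address + 8 dummy
 * clocks), check the "SFDP" signature, then walk the parameter headers to
 * find the JEDEC Basic Flash Parameter Table. */
#include <stdint.h>
#include <string.h>

/* hypothetical: full-duplex SPI transfer, tx then rx, one chip-select */
extern int spi_xfer(const uint8_t *tx, uint32_t txlen, uint8_t *rx, uint32_t rxlen);

static int sfdp_read(uint32_t addr, uint8_t *buf, uint32_t len)
{
    uint8_t cmd[5] = {
        0x5A,                                       /* Read SFDP */
        (uint8_t)(addr >> 16), (uint8_t)(addr >> 8), (uint8_t)addr,
        0x00                                        /* 8 dummy clocks */
    };
    return spi_xfer(cmd, sizeof cmd, buf, len);
}

int sfdp_find_bfpt(uint32_t *bfpt_addr, uint32_t *bfpt_len_dwords)
{
    uint8_t hdr[8];

    if (sfdp_read(0, hdr, sizeof hdr) != 0 || memcmp(hdr, "SFDP", 4) != 0)
        return -1;                                  /* no SFDP support */

    uint8_t nph = hdr[6];                           /* number of parameter headers - 1 */
    for (uint32_t i = 0; i <= nph; i++) {
        uint8_t ph[8];
        if (sfdp_read(8 + 8 * i, ph, sizeof ph) != 0)
            return -1;
        if (ph[0] == 0x00 && ph[7] == 0xFF) {       /* JEDEC Basic Flash Parameter Table ID */
            *bfpt_len_dwords = ph[3];
            *bfpt_addr = (uint32_t)ph[4] | ((uint32_t)ph[5] << 8) | ((uint32_t)ph[6] << 16);
            return 0;
        }
    }
    return -1;
}
```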