> - Move to single-voltage power supplies at 36-57 volts.
Why? And why not 12V? Please be specific in your answers.
> - Get rid of the "expansion card" and switch to twinax ribbon interconnects.
If you want that, it's available right now. Look for a product known as "PCI Express Riser Cable". Given that the "row of slots to slot in stiff cards" makes for nicely-standardized cases and card installation procedures that are fairly easy to understand, I'm sceptical that ditching slots and moving to riser cables for everything would be a benefit.
> - Kill SATA. It's over.
I disagree, but whatever. If you just want to reduce the number of ports on the board, mandate Mini SAS HD ports that are wired into a U.2 controller that can break each port out into four (or more) SATA connectors. This will give folks who want it very fast storage, but also allow the option to attach SATA storage.
> - Use USB-C connectors for both power and data for internal peripherals like disks.
God no. USB-C connectors are fragile as all hell and easy to mishandle. I hate those stupid little almost-a-wafer blades.
> - Standardize on a couple sizes of "expansion socket" instead...
What do you mean? I'm having trouble envisioning how any "expansion socket" would work well with today's highly-variably-sized expansion cards. (I'm thinking especially of graphics accelerator cards of today and the recent past, which come in a very large array of sizes.)
> - Redesign cases to be effectively a single ginormous heatsink with mounting sockets...
See my questions to the previous quote above. I currently don't see how this would work.
Graphics cards have finally converged on all using about the same small size for the PCB. The only thing that varies is the size of the heatsink, and because the current legacy form factor was optimized for large PCBs, the heatsinks grow along the wrong dimension and are louder and less effective than they should be.
> Graphics cards have finally converged on all using about the same small size for the PCB.
I guess you're not specifying tolerances here, so it'll be difficult to have a real conversation... but I've been watching a bunch of teardowns of video cards that have been released in the past few years, and I'm seeing substantial differences in PCB sizes and component placement.
Even putting that aside, OP hasn't provided any details on what exactly they were proposing... which is a damn shame, as I was interested in hearing them.
> Why? And why not 12V? Please be specific in your answers.
Higher voltages improve transmission efficiency, particularly for connectors, as long as sufficient insulation is easy to maintain. Datacenters are looking at 48V for a reason.
Nothing comes for free though, and it makes for slightly more work for the various buck converters.
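To put rough numbers on it: for a fixed load, current scales as P/V, so the I²R loss burned in cables and connector contacts falls with the square of the supply voltage. Quick sketch below; the 300W load and 10 mΩ of path resistance are just figures I picked for illustration, not anything standardized.

```python
# Rough illustration: for the same delivered power, I = P / V, and the
# loss in cables and connector contacts is I^2 * R, so it falls with
# the square of the supply voltage.

def conduction_loss_w(power_w: float, volts: float, resistance_ohm: float) -> float:
    """I^2 * R loss when delivering power_w at volts through resistance_ohm."""
    amps = power_w / volts
    return amps ** 2 * resistance_ohm

# Illustrative figures only: a 300 W load through 10 milliohms of
# cable + connector resistance.
for volts in (12, 48):
    loss = conduction_loss_w(300, volts, 0.010)
    print(f"{volts:>2} V: {300 / volts:5.2f} A draw, {loss:4.2f} W lost in the wiring")

# 12 V: 25.00 A draw, 6.25 W lost in the wiring
# 48 V:  6.25 A draw, 0.39 W lost in the wiring
```

Same power, a quarter of the current, a sixteenth of the loss in the path.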
> God no. USB-C connectors are fragile as all hell and easy to mishandle. I hate those stupid little almost-a-wafer blades.
They are numerous orders of magnitude more rugged than any internal connector you've used - most of them are only designed to handle a handful of insertions (sometimes connectors even only work once!), vs. ten thousand insertions for the USB-C connector. In that sense, a locking USB-C connector would be quite superior.
... on that single metric. It would be ridiculously overcomplicated, driving up part costs when a trivial and stupidly cheap connector can do the job sufficiently. Having to run at 48V to push 240W, with no further power budget at all, also increases complexity and cost and adds limitations.
USB-C is meant for end-user things where everything has to be crammed into the same, tiny connector, where it does great.
> Higher voltages improve transmission efficiency, particularly for connectors, as long as sufficient insulation is easy to maintain. Datacenters are looking at 48V for a reason.
What datacenters are doing has little bearing on what's worth the replacement and reengineering cost in a SOHO environment. Datacenters have to deal with multiple orders of magnitude more power than a SOHO user.
Also, given the long-standing existence of the 80 Plus program, I'm pretty suspicious of claims that switching to HVDC inside the computer chassis will have an effect on an end-user's power bill that isn't washed out by the increased expense of the replacement components. Will it? Do you have the numbers?
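For scale, here's the kind of back-of-envelope I have in mind, with assumptions I picked out of the air (a machine powered on 24/7 and electricity at $0.15/kWh, neither of which comes from this thread):

```python
# Hypothetical savings estimate - both inputs are numbers I picked,
# not measurements: a machine powered on 24/7, electricity at $0.15/kWh.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15

for watts_saved in (2, 5, 10):
    kwh = watts_saved * HOURS_PER_YEAR / 1000
    print(f"{watts_saved:>2} W of avoided distribution loss -> about ${kwh * PRICE_PER_KWH:.2f}/year")

#  2 W of avoided distribution loss -> about $2.63/year
#  5 W of avoided distribution loss -> about $6.57/year
# 10 W of avoided distribution loss -> about $13.14/year
```

Whatever HVDC buys in efficiency has to beat the cost of the redesigned PSUs, boards, and converters it requires, which is why I'm asking for numbers.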
> (sometimes connectors even only work once!)
I've been building PCs for decades. While I totally believe that there are such connectors, I've never been so unlucky as to encounter one.
> They are numerous orders of magnitude more rugged than any internal connector you've used.
Some? Sure. Any? Hell no. For example: You cannot honestly claim that a bigass ATX power connector is less resistant to shear force than the tiny wafer that is the USB-C connector. I've had many USB-C cables (and some ports!) fail at the connector end due to stresses that ATX cables handle just fine.
But yeah, I totally agree that -say- SATA connectors, or "freestanding" front-panel pin connectors are far more vulnerable to shear and torque than USB-C connectors.
I figured the USB-C connector would be the most controversial of the idea list. Certainly there might be better options, but I was interested in trying to harmonize on something that could radically cut connector diversity and drive up fungibility and flexibility. I was imagining that the power half of it would be USB-PD PPS+EPR from the motherboard to the peripheral.
The idea is that USB-C would be the "small widget" connector for things like large form factor disks (not m.2), optical drives (as much as those still exist), etc. Heavy iron things would use the bladed connector for power and the twinax ribbon for data. PCIe is great.
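For reference, here's a minimal sketch of the per-port power budgets that implies, using the fixed voltage/current levels from the PD 3.1 spec as I understand them:

```python
# Fixed voltage/current levels from the USB PD 3.1 spec, as I understand
# them; the point is just to show the per-port budgets EPR gets you.
pd_levels = [
    ("SPR  5 V",  5, 3),   # baseline USB-C power
    ("SPR 20 V", 20, 5),   # the old 100 W ceiling
    ("EPR 28 V", 28, 5),
    ("EPR 36 V", 36, 5),
    ("EPR 48 V", 48, 5),   # 240 W ceiling
]

for label, volts, amps in pd_levels:
    print(f"{label} -> up to {volts * amps:>3} W per port")
```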
It would also not sadden me tremendously to have CPU+RAM be a single device, be it on the CPU substrate or as a system-on-module. That flavor of upgradeability isn't worth losing a ton of performance. I'm sure tons of folks would hate it.
> I was interested in trying to harmonize on something that could radically cut connector diversity and drive up fungibility and flexibility.
Have you run the numbers on cost for USB-C plugs, ports, and cabling vs. what you're wanting to replace? My strong expectation is that you're proposing something that's substantially more expensive than what it's replacing.
It's also a plan that places one or more cables with electrically-conductive ends into a chassis with components that react... poorly to unplanned contact with electrically-conductive materials. Do remember that PC internals were designed around cables whose ends are carefully made to be electrically non-conductive when not plugged into their intended socket.
I asked you a bunch of other questions that I'd legit like the answers to. I wouldn't have bothered asking the questions if I wasn't interested in the answers.
Honestly, the idea of upgradeable RAM was only ever justified by RAM being a low-speed, high-cost component. Keeping it socketed in turn drives the ridiculous markup some companies charge now, drives up system cost through added complexity, and harms performance.
Having a desktop CPU with integrated 64-128GB of RAM would probably justify the lack of "upgradability" with sheer performance and power improvements. There was never much upgradability there anyway - you could start by underspec'ing and then later fill it up to the (surprisingly not that high) limit, but what you saved by not buying all the RAM up front is probably less than the extra cost for supporting it at all. Paying more than the cost of RAM to have the flexibility of not buying RAM.
(Phones have 24GB of RAM nowadays; a PC should not have any less than 64GB as its base tier.)