Very cool! The more I read about early computing, the more fascinated I become that the major problem innovators needed to solve was memory, not computing per se. There were precious few things that could be teased into working as read-write memory at electronic computer speeds before semiconductor memory, and they were all monstrously expensive (vacuum tube flip flops, Williams tubes, core memory) or nasty and dangerous (mercury delay lines). Core, despite its expense, was a major breakthrough in that it was solid state and therefore much less error prone.
I can't find the reference now, but I remember reading about some early computer engineers (possibly on Whirlwind?) contemplating the use of a microwave retransmitting station as a form of memory, essentially creating a "mercury" delay line with microwaves in the atmosphere.
This is how desperate people were for anything even remotely affordable which could be tortured to behave somewhat like memory! No wonder Intel made a bundle when they started offering chip memory. The stuff it was replacing was just totally inadequate for the purposes many people wanted to put it to.
> The more I read about early computing, the more fascinated I become that the major problem innovators needed to solve was memory, not computing per se.
Exactly. Electronic arithmetic hardware predates WWII. IBM had an electronic multiplier working. ENIAC was a giant plugboard machine. It's not that people didn't think of stored-program computers before Turing. It's that there was nothing in which to store the program.
IBM built machines with plugboard memory. Relay memory. Electromechanical memory. Punched-card memory. Punched-tape memory. Drum memory. Look at the history of the IBM 600 series machines, a long battle to get work done cost-effectively with very limited memory.
Delay line memory was sequential and slow. Williams tubes were insanely expensive per bit. Core memory was a million dollars a megabyte until the early 1970s and didn't get much cheaper. There was plated wire memory, thin film memory, and various ways to build manually updated ROMs. All expensive.
Then came semiconductor IC memory (1024 bits in one package!) and things started to move.
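For a sense of scale, that million-dollars-a-megabyte figure works out to roughly twelve cents per bit. A quick sketch (taking a megabyte as 2^20 bytes; the $1M figure is the one quoted above):

    # Core memory cost per bit, from the "million dollars a megabyte" figure.
    cost_per_megabyte = 1_000_000            # dollars
    bits_per_megabyte = 2**20 * 8            # 8,388,608 bits
    print(f"core: ${cost_per_megabyte / bits_per_megabyte:.2f} per bit")  # ~$0.12/bit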
"Mercury was used because the acoustic impedance of mercury is almost exactly the same as that of the piezoelectric quartz crystals; this minimized the energy loss and the echoes when the signal was transmitted from crystal to medium and back again. The high speed of sound in mercury (1450 m/s) meant that the time needed to wait for a pulse to arrive at the receiving end was less than it would have been with a slower medium, such as air (343.2 m/s), but it also meant that the total number of pulses that could be stored in any reasonably sized column of mercury was limited. Other technical drawbacks of mercury included its weight, its cost, and its toxicity. Moreover, to get the acoustic impedances to match as closely as possible, the mercury had to be kept at a constant temperature. The system heated the mercury to a uniform above-room temperature setting of 40 °C (104 °F), which made servicing the tubes hot and uncomfortable work. (Alan Turing proposed the use of gin as an ultrasonic delay medium, claiming that it had the necessary acoustic properties.)"
Also, they're not "loops" in the physical sense, though they do operate logically as loops: pulses go in one end of the tube, travel through the mercury, and are received at the other end, where they are repeated electronically back to the transducer at the starting end. "Reading" a bit means waiting for the moment when it is hitting the pickup end, and pulling it out of the loop at the same time as repeating it to the transmitting end. "Writing" means waiting for the same moment in the "loop" and replacing the output of the pickup transducer with a signal that represents the data you wish to write.
Thus, there is an inherent tension between making the tubes longer (more storage per tube) and keeping them shorter (lower access times). The same tradeoff applies to all forms of "delay" memory, including the torsion-wire memory the grandparent mentions.
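To make the read-and-write-by-waiting behavior concrete, here is a toy software model of a recirculating delay line. It is purely illustrative (the class and its names are mine, not anything from a real machine), but it shows the length/latency tension directly:

    from collections import deque

    # Bits march through the "mercury" one clock tick at a time; the only
    # way to touch a bit is to wait until it reaches the pickup end.
    class DelayLine:
        def __init__(self, n_bits):
            self.line = deque([0] * n_bits)  # index 0 is the pickup end
            self.position = 0                # logical address now at the pickup

        def tick(self):
            # One clock period: the pickup-end pulse is regenerated and
            # re-transmitted into the far end of the line.
            self.line.rotate(-1)
            self.position = (self.position + 1) % len(self.line)

        def access(self, address, write_value=None):
            # Read (and optionally overwrite) one bit, counting ticks waited.
            waited = 0
            while self.position != address:  # spin until the wanted bit arrives
                self.tick()
                waited += 1
            value = self.line[0]
            if write_value is not None:      # writing = substituting the pulse
                self.line[0] = write_value
            return value, waited

    # A longer line stores more but makes you wait longer per circulation.
    for size in (16, 512):
        mem = DelayLine(size)
        mem.access(0, write_value=1)     # write address 0 (it's at the pickup)
        mem.tick()                       # the bit re-enters the line...
        value, waited = mem.access(0)    # ...and returns one circulation later
        print(f"{size:4d}-bit line: read back {value} after {waited + 1} ticks")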
They are mentioned in Turing's Cathedral [0], an excellent book that was recommended here on HN. Here's an interesting quote:
There were two sources of noise: external noise, from stray electromagnetic fields; and internal noise, caused by leakage of electrons when reading from or writing to adjacent spots. External noise could, for the most part, be shielded against, and internal noise was controlled by monitoring the "read-around ratio" of individual tubes and trying to avoid running codes that revisited adjacent memory locations too frequently -- an unwelcome complication to programmers at the time. The Williams tubes were a lot like Julian Bigelow's old Austin. "They worked, but they were the devil to keep working," Bigelow said.
This phenomenon is very similar to the recently discovered rowhammer vulnerability in DRAM, except that it predates it by roughly 65 years.
I learned about Turing Machines as a CS student a long time ago, and got the false impression that he was only involved in theoretical pursuits (theory of computation, algorithms for cryptography, etc.). It was only after reading this book that I learned how much he had contributed to the design of actual computing hardware.
> trying to avoid running codes that revisited adjacent memory locations too frequently
And today, it's exactly the opposite: we try to write code that has as much locality of reference as possible so that we can avoid expensive cache misses.
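A small demonstration of the idea (pure Python, so the effect is muted compared to C, where cache behavior dominates, but the access-order principle is the same):

    import time

    # Summing a 2D array row-by-row walks memory in order; column-by-column
    # strides across it, which is what caches punish.
    N = 2000
    grid = [[1] * N for _ in range(N)]

    start = time.perf_counter()
    total = sum(grid[i][j] for i in range(N) for j in range(N))  # row-major
    row_major = time.perf_counter() - start

    start = time.perf_counter()
    total = sum(grid[i][j] for j in range(N) for i in range(N))  # column-major
    col_major = time.perf_counter() - start

    print(f"row-major: {row_major:.3f}s, column-major: {col_major:.3f}s")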
Can anyone explain why the letters and characters in the grid are ordered the way they are?
A-Z is non-sequential; I’m guessing it has something to do with making the character-selection logic simpler, but just by looking at the letters I couldn’t come up with a definite pattern or rule.
Very cool, especially the "SEM with a fixed target" analogy. I've often wondered how the old SAGE terminals handled character generation, and this post fills in the blanks nicely.
I'm usually too scared to work with anything over 120V, especially since parts on the linked schematic require a 1000V power supply. Very neat, but too dangerous for an average home-gamer like myself to be inspired to go out and build.
It can be worked with safely if you follow the necessary precautions, but unless you are familiar with those, better to follow the disclaimer and "do not try this at home".
The main issue is that capacitors keep their charge after unplugging, so you might have some that are still dangerously charged.
I remember when opening old CRT TVs they had a big notice inside about discharging them before servicing some parts of the circuit.
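A rough sense of why those notices exist; the component values here are illustrative assumptions, not specs from any particular set:

    # Energy stored in a capacitor: E = 1/2 * C * V^2
    def stored_energy_joules(capacitance_farads, volts):
        return 0.5 * capacitance_farads * volts**2

    # A CRT anode behaves like a capacitor of a few nF at ~25 kV (assumed).
    print(f"CRT anode: {stored_energy_joules(2e-9, 25_000):.1f} J")   # ~0.6 J
    # A mains-side filter cap (assumed 470 uF at 170 V) stores far more.
    print(f"filter cap: {stored_energy_joules(470e-6, 170):.1f} J")   # ~6.8 J

Either one can bite hours after unplugging if there's no bleeder resistor.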
A good GFCI, safety equipment, and respect go a long way.
The first one especially has saved me more than a dozen times at this point. Yay for the required-by-law GFCI at the house level.
I also recommend getting some Schuko (CEE 7/3 and CEE 7/4) or UK plugs and sockets, even if you're in the US, just so you can avoid the safety nightmare that is the US plug. (Though I'm not sure if you can do them as permanent installs in the US... they're still neat for lab equipment.)
edit: you should have respect for anything above 50V or so; past that point it can be quite dangerous. 120V is way above where I start using safety equipment.
No GFCI will save you on the secondary side of a transformer. My point is: working with standard 120/230V requires just following codes and guidelines. Tinkering with HV in non-standard setups requires understanding of how and why.
There is always the potential for a ground fault. The secondary side of the transformer can be even more dangerous: an unknown fault puts one side at ground potential, and then, thinking there is no ground reference, you touch ground and the other side. Normally that would be harmless, but because of the unknown ground fault it is deadly.
In many ways mains power would be safer if there were no ground, and a lot cheaper. However, a couple of failure cases are even worse without ground, and they are the type you only find out about when somebody dies. Thus we put ground in houses.
edit: also, if you put both hands into a device, it won't matter much whether it's on the other side of an isolating transformer or not. The GFCI might not trigger in this case.
> also, if you put both hands into a device, it won't matter much whether it's on the other side of an isolating transformer or not. The GFCI might not trigger in this case.
You almost killed yourself over a dozen times, but your advice to the inexperienced person above is a few safety tips, not much better than “just be careful”.
The only reasonable advice is to not even touch the stuff unless you’re an expert with plenty of training. Especially since there’s nothing to be gained except satisfying a useless curiosity.
I almost killed myself over a dozen times and yet I'm still standing. If you won't take advice about handling high voltage from someone who has experienced it a couple of times, from whom will you take it?
I will gladly touch stuff to change my lightbulb; I'm not going to consult an expert for that. The same goes for swapping fuses and other similarly simple procedures that are reasonably safe if you are "just careful".
Useless curiosity is where the majority of human progress comes from.
those are dangerous too--they output high frequency AC which can cause burns. with a thumb-sized CCFL supply i once burned a tiny hole in my finger. never bled a drop since the current cauterized it but it hurt for a week.
The fact that you can burn yourself with a soldering iron is mostly obvious. That a CCFL inverter can easily burn a hole through your finger without you noticing until it's too late is not that obvious.
In both cases, with overwhelming probability. But there is a difference: you have to be extraordinarily clumsy to get a life-threatening burn from a soldering iron, but you can get a life-threatening burn from HF HV very easily.
This thread serves as a great example of why it's not a good idea to plaster dire warning labels all over everything on the planet "just to be safe." When everything is dangerous, nothing is.
you've got to be very careful and never use HV without treating it with respect. i use the one hand rule extensively, keeping my left hand firmly tucked behind my back while the supply is turned on.
Dunno about the Monoscope in particular, but a lot of tubes need several milliamps, which is quite a bit more painful than a good carpet shock. Not fatal, but not something I'm eager to experience. And I'm the guy who used a one-henry inductor to administer electric shocks to people in high school — including, repeatedly, myself.
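For intuition about why milliamps matter, a crude Ohm's-law estimate; the body-resistance numbers are rough textbook ranges, not measurements:

    # I = V / R, reported in milliamps.
    def body_current_ma(volts, body_resistance_ohms):
        return volts / body_resistance_ohms * 1000

    for volts in (120, 1000):
        dry = body_current_ma(volts, 100_000)  # ~100 kOhm, dry skin (rough)
        wet = body_current_ma(volts, 1_000)    # ~1 kOhm, wet/broken skin (rough)
        print(f"{volts:5d} V: {dry:7.1f} mA dry, {wet:7.1f} mA wet")
    # Perception starts around 1 mA, "can't let go" around 10 mA, and
    # fibrillation risk in the tens of mA -- hence current limiting.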
The issue is that it's a lot easier to build or buy a kilovolt power supply that doesn't have adequate current limiting than one that does. And even one that has it in theory may not have it in practice — that big capacitor across the output? Make sure it isn't just wired directly to the output terminals, because its ESR sure as hell isn't going to be adequate current limiting.
So I think it's reasonable to be wary of kilovolt circuits. Dying is easy, but you only get to try it once.
Lightedman's dead comment, in response to my, "The issue is that it's a lot easier to build or buy a kilovolt power supply that doesn't have adequate current limiting than one that does," said, "You can buy Van de Graaf generators all day long. Quarter-million volts minimum for $200."
While Van de Graaff generators are indeed relatively affordable, and some even cost less than US$200, a working microwave oven costs US$60, and a broken one can be had for under US$10. Furthermore, a safe Van de Graaff generator is not actually capable of supplying enough current at high voltage to operate a vacuum tube, while a deadly microwave-oven transformer is; and there are orders of magnitude more microwave ovens available.
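A ballpark comparison of the two, using typical assumed figures (the 250 kV is the number from the quoted comment; the transformer values are common textbook ones, not specs of a particular unit):

    # Voltage alone isn't the danger; sustained current (power) is.
    supplies = {
        "Van de Graaff (demo unit)":  (250_000, 10e-6),  # microamps of belt current
        "microwave oven transformer": (2_100, 0.5),      # hundreds of mA
    }
    for name, (volts, amps) in supplies.items():
        print(f"{name:28s} {volts:8d} V {amps * 1000:9.3f} mA {volts * amps:8.1f} W")
    # The VdG can't sustain the milliamps a tube needs; the MOT easily can.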
A kV from a half-decent supply will throw you across the room in a bad case and give you a nasty burn in a slightly better one. Don't underestimate the power of an HV supply, especially one that has some nice stabilization and caps.
my power supply is capable of 1KV at 10mA. it's the one piece of lab equipment i own that genuinely scares me. all this stuff has to be treated with respect. still though it's good for cool experiments with monoscope tubes. :-)