This video covers a particular type of transistor known as the Bipolar Junction Transistor (BJT). These are more commonly used in analog applications like amplification and signal processing than in digital logic (though they do appear in specialized digital logic circuits).
Today, field effect transistors (FETs) reign supreme for most IC applications such as CPUs and digital logic as they're more scalable and efficient than BJTs and have a very different structural design.
> In fact, they invented a new part that has the "input" gate of an FET and the Collector-Emitter "output" of a BJT!
IGBTs are far from being a new invention.
And strictly speaking, BJTs don't switch very high currents with higher net efficiency than FETs.
SCR-type devices are used for kiloampere-range switches because they're the only switches that can mechanically/thermally handle that much current.
But for high voltages, bipolars will indeed go higher than FETs.
In "on" mode, FETs appear as a small resistance R_ds_on, but BJTs act like a mostly-constant voltage drop V_ce_sat.
So for any BJT and FET we compare, the FET is more efficient at currents below V_ce_sat/R_ds_on, and the BJT is more efficient above that.
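That crossover can be sketched numerically. A minimal sketch, assuming made-up illustrative part values (no particular device):

```python
# Conduction-loss crossover between a FET switch and a BJT switch.
# R_DS_ON and V_CE_SAT are assumed, illustrative values only.
R_DS_ON = 0.05   # ohms, assumed FET on-resistance
V_CE_SAT = 0.2   # volts, assumed BJT saturation drop

def fet_loss(i):
    """FET conduction loss: I^2 * R_ds_on (looks like a resistor)."""
    return i * i * R_DS_ON

def bjt_loss(i):
    """BJT conduction loss: I * V_ce_sat (roughly constant drop)."""
    return i * V_CE_SAT

crossover = V_CE_SAT / R_DS_ON  # current where the two losses are equal
print(crossover)                    # 4.0 A for these values
print(fet_loss(1.0), bjt_loss(1.0))    # below crossover: FET dissipates less
print(fet_loss(10.0), bjt_loss(10.0))  # above crossover: BJT dissipates less
```

Below 4 A the I²R term is smaller than the fixed-drop term; above it, the quadratic FET loss overtakes the linear BJT loss.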
Depending on voltage. FETs with extremely low R_ds_on exist. The real-world choice would depend on whether you just need a constantly-on switch, or high-frequency switching for power conversion.
FETs and BJTs are both common for analog. Discrete analog tends to prefer BJTs, which (among other features) have much higher transconductance, which you can never get enough of. FETs offer ultra-high input impedance, better performance at low power (though BJTs ain't all bad there), and the big one... they're built on CMOS processes. That means you can integrate your analog stuff with a giant pile of digital logic, which is a tremendously useful thing to do. (But the analog section alone often would be better if it were on a bipolar process.) Op-amps are split evenly between both types. Okay, evenly-ish.
> I can’t explain that attraction in terms of anything else that’s familiar to you. For example, if we said the magnets attract like rubber bands, I would be cheating you. Because they’re not connected by rubber bands. I’d soon be in trouble. And secondly, if you were curious enough, you’d ask me why rubber bands tend to pull back together again, and I would end up explaining that in terms of electrical forces, which are the very things that I’m trying to use the rubber bands to explain. So I have cheated very badly, you see. So I am not going to be able to give you an answer to why magnets attract each other except to tell you that they do. And to tell you that that’s one of the elements in the world – there are electrical forces, magnetic forces, gravitational forces, and others, and those are some of the parts. If you were a student, I could go further. I could tell you that the magnetic forces are related to the electrical forces very intimately, that the relationship between the gravity forces and electrical forces remains unknown, and so on. But I really can’t do a good job, any job, of explaining magnetic force in terms of something else you’re more familiar with, because I don’t understand it in terms of anything else that you’re more familiar with.
He also explains seeing, heat, electromagnetism, elasticity, and mirrors, among other things.
His academic lectures are just as good but too long and hard for laymen to follow.
- Every time I switch on an electrical device I hear Feynman say 'Zzzzinggg' and I see the copper bars jiggling across town.
- Every time I see a cup of hot liquid I hear Feynman say "jiggling atoms"
- Watch his hands and fingers telling the more accurate science story, simulating the electrons and atoms.
- I would say that this is the most important video for any human being on the planet to see. The second most important thing would be half of Alan Kay's lectures https://youtu.be/FvmTSpJU-Xc?t=2067
Great videos to watch with your kids (from age 3-4 and up)!
Electricity is the behavior of electric charge, and electric charge is an intrinsic property of some particles (it'd be like asking what mass is, without falling back on what mass does).
Electric charge has a magnitude (arbitrarily, we have labeled the axis such that protons are positively charged and electrons are negatively charged, but it's symmetric: if everything in the universe swapped positive for negative, nothing would change). Particles with opposite charges attract; particles with the same charge repel.

When a source of positive charge and a source of negative charge are separated, there is potential energy. We simplify our math a bit by factoring the charge that would move from positive to negative out of our potential-energy calculation, to get "electric potential" or simply "voltage". When there is a path that charge can flow through between a high potential and a low potential, it creates a flow of charge between the positive and negative sources that we call "current". As the charge flows, the potential energy decreases, meaning other energy has to be released; the most common way this happens is simply by creating heat.

Some materials allow charge to flow through them more easily than others: the ratio of the potential (voltage) across a component to the rate of charge that flows through it as a result (current) is approximately constant for most things, and we call that resistance. This gives us Ohm's law: V = IR.
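A minimal numeric check of that last relation, with made-up component values:

```python
# Ohm's law, V = I * R. R and V are invented example values.
R = 220.0   # ohms (a common resistor value)
V = 5.0     # volts across the resistor
I = V / R   # resulting current, in amps
print(round(I * 1000, 1))  # current in milliamps: 22.7
```

Double the voltage and the current doubles; double the resistance and it halves, which is all "approximately constant ratio" means in practice.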
The initial analogy in this video is totally useless.
In general, though, the water analogy is actually quite good. The electron gas in conductors behaves a lot like a normal fluid. It easily covers linear components like resistors, inductors, and capacitors. Nonlinear components like transistors and diodes require a more extensive analogy that is no longer very accurate.
Did you invent this? It's fantastic. Lots of people misunderstand electricity as just "tiny little balls colliding with other balls", or think that the electrons themselves must be zooming around the circuit, rather than the wave they participate in.
"Electron gas" sounds right to my ears for these simple analogies. Just like sound oscillates, but still moves from speaker to ear, so too does AC current oscillate, but the energy has a single direction.
The nearly free electron gas model is the standard introduction to explaining conduction in solids. In other words, no he didn’t invent it. Interestingly enough, the first model for explaining conduction was naively assumed to be a simple gas model. After quantum mechanics was introduced into the modeling, it was discovered that while not quite correct, it was not very far off either.
You'll end up teetering between the math, which completely breaks down intuition about electricity, and analogies, which keep your understanding a bit grounded. Veritasium's videos on electricity were an attempt to de-metaphorize electricity, but they just yielded more questions.
Electricity is almost literally tubing filled with marbles that all violently repel one another, with the tube violently attracting the marbles.
A loop of marbles and tube is static: the marbles all push against each other and are drawn to the tube.
If you pump some marbles up the tube, they'll bump into the other marbles, which will want to repel. They'll scurry away, pushing the next marbles, and so on. That pump is a voltage, and the movement is current.
A resistor is a sludge that the marbles pass through. They can, but only if they’re being pushed by a voltage.
After some thought, here is my stab at explaining what electricity "actually" is, at least in a way that works well for classical modelling. (There is a necessary amount of handwaving and inaccuracies by omission, but I'm trying to keep it at an ELI5 level).
All matter is composed of squintillions of tiny things called "atoms", each composed of a core whose charge is a certain positive integer (its atomic number) and a certain number of tinier things whizzing around the core called electrons. If the number of electrons whizzing around an atom's core is not the same as the atomic number, the atom gets mad and will either try to fob excess electrons off onto surrounding atoms or steal them from surrounding atoms (depending on whether it has more or fewer electrons than ideal).
Thus, if you decide to pick a few atoms and kick electrons out of them, it starts a chain reaction of electron motion that is observable at the macroscopic scale. This chain reaction is an electric current, which is measured as going in the opposite direction of the way the electrons flow because Benjamin Franklin guessed wrong. The number of electrons flowing per unit time is the current (as a measurement), measured in amps (approximately 10 million trillion electrons per second).
It also turns out that you can vary how ferociously these electrons are hitting atoms: the voltage (amount of energy per ~10 million trillion electrons). Different materials are better or worse at absorbing (or resisting, if you will) the ferocity of electrons, and this is the resistance. If the resistance is high enough, it basically becomes impossible for the electrons to flow, and you get an insulator.
But how do you get the electrons to start moving in the first place? The easiest to explain involves chemical reactions: sometimes, atoms decide they'd rather be in a different orientation, and in the process of moving to that orientation, they need to emit some electrons first. With some cleverness, you can set things up so that electrons have to go around the "long way" (through a wire), and something that is set up to be able to do this is more commonly known as a "battery." The other main way you can do it is by creating changing magnetic fields, which are kind of created by changing electric currents, and explaining this in more detail basically requires throwing away everything I've described, starting from scratch with the actual physics, and still coming to the realization that mathematical equations are not satisfactory answers to the question "what is it."
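The quantities above can be sanity-checked with a back-of-envelope script. The physical constant is standard; the 1.5 V / 0.1 A scenario is invented for illustration:

```python
# One ampere expressed in electrons per second, and voltage * current
# as energy per second. E_CHARGE is the standard elementary charge.
E_CHARGE = 1.602e-19              # coulombs per electron
electrons_per_amp = 1 / E_CHARGE  # electrons/second in one amp
print(f"{electrons_per_amp:.2e}") # ~6.24e18, same ballpark as the
                                  # "~10 million trillion" above
power = round(1.5 * 0.1, 3)       # a 1.5 V cell driving 0.1 A
print(power)                      # 0.15 joules per second (watts)
```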
Right at 8:20 is a mistake. The electrons are not "pulled" across the reverse-biased collector/base junction. The technical term for electrons being "pulled" by an electric field is "drift", but that's not what's happening.
It's the other mechanism for charge carrier movement that causes the current through the base/collector depletion region: diffusion. This is really just thermal diffusion- the injected electrons will diffuse into an empty space in the same way that air quickly diffuses into an evacuated container. Once electrons have diffused into the collector / base depletion region, drift due to the collector potential "pulls" them the rest of the way.
Anyway, BJTs are partially a thermal device, since their operation depends on diffusion!
As always, these things tell us how transistors work, which is fine and all that, but that's not enough to tell anyone how to use one, something that is almost universally done incorrectly even in some high-profile textbooks. They are a little more complicated than this video can muster. In fact, I'd argue it's probably worth skipping this and delving properly into the mathematics.
Wes Hayward W7ZOI explains this side of things rather well in his book Experimental Methods in RF design. Some of the content is duplicated here discussing bipolar transistor feedback amplifier designs: http://w7zoi.net/transistor_models_and_the_fba.pdf
Hayward's "more detailed model by Ebers and Moll" features prominently in AoE, so don't worry, it's covered there too.
The cardinal sin of teaching BJTs is to say that they're current-controlled amplifiers: little current in, big current out. They are not, not really. They're voltage-to-current converters: little voltage wiggle at Vbe, big current wiggle at Ic. This is really not in dispute and anyone who tries to argue for the hFE/beta model as really true is wrong. Certainly, it is a useful approximation and often all you need, but it is not how a BJT really works and so don't pretend it is.
(The link between the two is that the base is not a high-impedance input. That is pretty uncommon, so it's not surprising that it's kind of confusing! Because it's a low-impedance input, small changes in applied Vbe correlate directly with small changes in base current, so it looks current-controlled. But careful measurements, as I believe are plotted in AoE, reveal the primacy of the Vbe-controlled model.)
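The Vbe-controlled view can be sketched with the simplified exponential law Ic ≈ Is·exp(Vbe/VT), a one-term forward-active approximation of Ebers-Moll; the Is and VT values below are assumed typical numbers, not from any specific device:

```python
import math

# Simplified forward-active collector current: Ic = Is * exp(Vbe / VT).
IS = 1e-14      # saturation current, amps (assumed typical)
VT = 0.02585    # thermal voltage kT/q at ~300 K, volts

def ic(vbe):
    """Collector current for a given base-emitter voltage."""
    return IS * math.exp(vbe / VT)

# An increase of VT*ln(2) ~= 18 mV in Vbe doubles Ic:
i1 = ic(0.650)
i2 = ic(0.650 + VT * math.log(2))
print(round(i2 / i1, 6))   # 2.0
```

That "18 mV per doubling" behavior is exactly the small-voltage-wiggle-in, big-current-wiggle-out picture; no base current appears anywhere in the control law.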
I like this one. I especially like that it stops to explain the junction bias voltage and why it becomes the "diode drop" (even if it doesn't get into the implications), which is IMHO something most new people struggle with when understanding analog circuit behavior. Likewise, it's careful to explain that the BJT transistor behavior is due to careful tuning of dopant levels, something that took me a long time to grok (FETs are easier to understand).
The one thing I do wish it would clarify, though, is that this is not a FET, and it's not discussing the kind of circuits used in digital logic. These are the transistors you see in analog amplifiers and very old computers.
As I understand it, and I'm not a quantum physicist so I may be wrong, electrons lose energy when they go from the emitter into the base, because they're moving from N material where their energy is high into P material where it's low, which is why you have that 0.7-volt drop. But, by the same token, electrons that move from the base into the collector gain energy. It's just that they're regaining the energy they lost when they crossed into the base in the first place.
That is, the built-in potentials of the two diode junctions are in opposite directions, so in series combination those voltages mostly cancel out.
Since the base-emitter current heats up the junction, but the collector-emitter current doesn't (or rather it heats it up to the tune of 0.2 milliwatts per milliamp rather than 0.7) it stands to reason that this collector-base transition must be sucking heat out of the junction when the electrons cross it. And I think this is actually how thermoelectric generators (TEGs) and Peltier coolers made out of semiconductors like bismuth telluride work.
One problem with this account is that I think the band-bending diagram of a p–n junction has the electrons in the P material at a higher energy level, although I guess the relationship of the Fermi levels is opposite?
So I am a (former) quantum physicist (though the wrong kind, an experimental–elementary-particles type rather than a theoretical–condensed-matter type) and am now an electrical engineer, and I can say with confidence, bipolar junction transistors are hard to really understand. I don't think I've ever seen a single explanation that goes through it all in an understandable way.
In particular, factors that have to be considered are the built-in potentials of the space charge layers (aka depletion zones), and the different doping profile of the collector versus emitter. The doping is very important, and is often neglected. But without it, transistors would basically operate equally well in forward mode as in reverse, and they definitely don't do that. These two things have a big effect on the Fermi/band-bend diagram and probably help clear up your last point.
I barely feel like I actually understand these things on a good day, and I pay the bills with them....
QM isn't too bad. The basic ideas are alien but straightforward enough. It's hard to extend small examples to larger systems, though, and that plus the innate weirdness really messes with your intuition.
BJTs are complicated devices built from multiple simple-enough effects all happening at the same time. Each piece isn't too bad, but you've got to track them all and combine them into an overall function.
QFT (quantum field theory, such as QED) is basically all the bad parts of both of those: the basics aren't too bad, but it's difficult to scale, there's a lot going on, and it's hard to develop intuition. QCD then makes the basic calculations intractable, which is why it's still poorly understood to this day (in my opinion).
So, yeah, in conclusion, complexity is complex? Not so profound, I guess.
Yet Shockley was able to make quantitative predictions about, say, solar cell performance that are still considered useful today. And I don't think kT shows up in the Ebers–Moll model just by chance.
Transistors use a small current to control a larger current. You can think of this as one person tapping another person on the shoulder. But what good is that? Not much, on its own.
It is only possible to use this property to store data because you can build a circuit called a flip flop https://en.wikipedia.org/wiki/Flip-flop_(electronics) that enables 2 pulses of current to translate into 1 pulse of current.
That may not seem like much, but it enables everything happening in modern digital technology.
So how does it work?
Let's say you have 5 people in front of you, in a line. Everyone in line is directed to tap once on the next person for every 2 taps on their shoulder. So, for you, you feel 2 taps, and then you tap once. This is what a flip flop does.
Now, you get an additional instruction. When you are tapping with your left hand, you raise your right hand and when you are not tapping with your left hand, you lower your right hand.
Now, for every tap of the first person, the next person taps half as often, and the next person in line taps half as often, etc.
If everyone keeps time, looking at the group of people, you will see raised arms and lowered arms. The raised arms are 1s and the people without raised arms are 0s. Now you are counting in binary.
In a digital clock, this process is used to translate a quartz crystal's pulses into counting seconds and time. The same process can also be used to store numbers. For example, let's say I have 25 flip-flops in series. Now I can store a number as large as 2^25 - 1 in memory.
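The tapping chain above is exactly a ripple counter. A small simulation (my own sketch, not from the comment) shows the "raised arms" spelling out binary:

```python
# A chain of toggle (T) flip-flops: each stage flips on every pulse it
# receives and passes a pulse onward only when it flips back to 0,
# so stage n toggles at 1/2^n the input rate.
def count_pulses(n_stages, n_pulses):
    bits = [0] * n_stages        # 0 = arm lowered, 1 = arm raised
    for _ in range(n_pulses):
        for i in range(n_stages):
            bits[i] ^= 1         # this stage toggles
            if bits[i] == 1:     # went 0 -> 1: no carry to next stage
                break            # (going 1 -> 0 ripples onward)
    return bits                  # bits[0] is the least-significant bit

# After 5 pulses into a 3-stage chain, the arms read 101 binary = 5:
print(count_pulses(3, 5))   # [1, 0, 1]
```

Reading the arms most-significant-bit-first gives the pulse count in binary, which is all a digital clock's divider chain is doing.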
Nice. But that's a BJT in saturation or cutoff, which is only part of the story. The magic of a BJT is in the active region, which is like using a crowbar to lift something really heavy: the lever amplifies your force the same way a little base current amplifies the collector current.
Going further, a mosfet doesn't use current to switch, it uses electric field. I like to say:
The charge that builds up on the gate from the applied voltage creates a depletion (or enhancement) region, which is like Moses parting the Red Sea, so that holes/electrons can move through it.
You're talking about the role that transistors play as switches in a digital logic circuit. The video is about the physical operation of, in particular, bipolar junction transistors, which are more commonly used as amplifiers.
To me even a diode is slightly mysterious, or at least the "electrons and holes" explanation is. It kind of skips what happens at the metal-silicon (n-doped) junction and why this doesn't work like a diode even though "holes meet electrons".
My mistake, make that p-doped. Anyway, from the pure "electrons and holes" view the diode is actually electrons(metal), electrons (silicon), holes(silicon), electrons(metal).
So electrons-holes-electrons. No explanation for the asymmetric behaviour.
Perhaps you are thinking of the Schottky barrier. (Curiously, though, in the Schottky diode they typically do use the n-doped silicon, although p-type also forms a Schottky barrier with metals.)
He uses water, which I think tries to touch on the utility of a transistor and semiconductors. The video should have ended or been followed up with the reason any of what he described matters - how transistors are used in computing.
Interesting, I always thought it didn’t make a difference how much current was applied to the emitter, that it was basically a binary switch above some very small threshold.
https://nandgame.com/ will guide you through using NAND gates to create one computer. (There's lots more to learn about how to create other kinds of computers, and other ways of designing them, but walking through that one example should at least dispel the illusion of magic.)
The last time I did it, it took me two hours, but I think that if you are coming at it without any previous knowledge of digital logic it should be tractable within about 40 hours.
Then the only remaining things to understand are:
① how can you create NAND gates with transistors?
② why are transistors better for this than, for example, relays, pneumatic valves, or neon tubes, all of which would also clearly work?
The answer to ① is actually pretty simple; the best explanation I've found is in The Art of Electronics, though Ken Shirriff's blog has a lot of great stuff on it too. And Falstad's circuit.js has some good simulations, like https://tinyurl.com/2eegeyxj and https://tinyurl.com/ybpo5ls2, which I think are nicely complementary to the theory.
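As a complement to ①, here's a tiny sketch (my own, not from AoE or the linked simulations) of the payoff: once NAND exists, every other gate follows by composition, using the standard constructions:

```python
# NAND is functionally complete: NOT, AND, OR, and XOR built from it.
# In an RTL/TTL-style circuit, nand() itself would be two transistors
# in series pulling the output low only when both inputs are high.
def nand(a, b):
    return 0 if (a and b) else 1

def NOT(a):     return nand(a, a)
def AND(a, b):  return nand(nand(a, b), nand(a, b))
def OR(a, b):   return nand(nand(a, a), nand(b, b))
def XOR(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```

So the transistor-level problem reduces to building that one gate well, which is what the AoE treatment and the circuit.js demos walk through.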
I've never seen a good explanation of ②, but I think the main answer is that electrons are about ten thousand times lighter than atoms, so with the same energy, they move about 100 times as fast. The result of that is that switches that work by moving around electrons can switch a great deal faster than switches that work by moving around atoms. Neon tubes also work by moving around electrons, but they're slower than transistors for a different reason: it takes the gas a long time to thoroughly deionize.
That's true for information transmission, like what happens in wires, but as I understand it, switching in semiconductor diodes, transistors, and vacuum tubes actually requires the electrons to move in and out of depletion zones (or the vacuum). Switching in relays or pneumatic valves requires moving solid assemblages of atoms around instead. That's why it's slower, even though the drift velocity in the wires connected to a relay is the same as in the wires connected to a transistor.
Good points! It's true that switching time in transistors is a thing! There's even something called gate delay! I remember working on a system where 1 ns delays were available via a chain of transistor gates on the silicon die. You would run some tests and adjust the delay up or down by adding or removing a gate (selectable via some logic), and once you got good results in your calibration you'd lock in the delay. This allowed for synchronous signals despite variation from different foundry processes and a variety of PCB layout distances between chips.