I remember RPI had something like this in the 1990s. I can't remember what it was called though. But I do remember how impressed everyone was: if you call this phone number, they can answer ANY question!
I believe those of us who were around from then to now experienced peak information. We went from having to look things up in libraries to being able to find anything with a Google search. We're on the downward slope now. Business models have changed, spamvertisers are winning the war against search, and generative AI slop is already the dominant source of "content", ensuring the genie can never be put back. This is not an anti-AI rant; it is just an acknowledgement that, like so many things, we were foolish to think that access to information was just going to keep getting better. I did not expect that in my lifetime, I would see the best it was ever going to be.
Maybe in the future, calling a trained human for help will be the only way to sort through the mountain of infogarbage to find something. Or we'll have to go back to the library.
I remember “learning to use the library” was a thing growing up in the ’90s. It was funny because we’d come far enough along with the internet stuff that even the adults teaching this basically knew it was going to be a very niche skill. But still, something that every educated person was supposed to know how to do.
That's still an important skill! You might think that everything that can be digitized has been and is easily available online, but that's not the case. Especially so for more obscure books and publications that are mostly relevant to your local area.
When I moved to my current town and visited their library, I very quickly found some books written about the local area for which not much info existed online. It's a great way to spend some time if you're into that kind of thing!
I think it is the permanent end of American economic/political/cultural dominance, which is a long-term gain for the world, but it's going to put the hurt on a lot of people (myself included). I am not quite altruistic enough to celebrate being sacrificed in this way, but I can see that when the future history books are written, they may look back at this as the end of a blight.
Ran it and it crapped out with a huge backtrace. I spotted `./build_bundled.sh: line 21: cmake: command not found` in it, so I guessed I needed cmake installed. `brew install cmake` and tried again. Then it crapped out with `Compatibility with CMake < 3.5 has been removed from CMake.`. Then I gave up.
This is typical of what happens any time I try to run something written in Python. It may be easier than setting up an NVIDIA GPU, but that's a low bar.
Python problems exist on all platforms. It's just that most people using Python have figured out their 'happy path' workarounds in the past and keep using them.
Python is awesome in many ways, one of my favourite languages, but unless you are happy with venv manipulation (or live in Conda), it's often a nightmare that ends up worse than DLL-hell.
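For anyone who hasn't memorized the ritual, the "happy path" I mean amounts to roughly this (a generic sketch; the project layout and requirements file are hypothetical):

```
# create an isolated environment next to the project
python3 -m venv .venv

# activate it for this shell session
source .venv/bin/activate

# install the project's pinned dependencies into the venv only
pip install -r requirements.txt

# run the program with the venv's interpreter
python main.py
```

The trouble starts the moment any step deviates: the wrong Python version, a package that wants a compiler (see the cmake saga above), or a README that assumes a different tool entirely.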
Python is in a category of things you can't just use without being an expert in the minutiae. This is unfortunate because there are a lot of people who are not Python developers who would like to run programs which happen to be written in Python.
Python is by no means alone in this or particularly egregious. Having been a heavy Perl developer in the 2000s, I was part of the problem. I didn't understand why other people had so much trouble doing things that seemed simple to me, because I was eating, breathing, and sleeping Perl. I knew how to prolong the intervals between breaking my installation, and how to troubleshoot and repair it, but there was no reason why anyone who wanted to deploy, or even develop on, my code base should have needed that encyclopedic knowledge.
This is why, for all their faults, I count containers as the biggest revolution in the software industry, at least for us "backend" folks.
I'm surprised this Jon Richardson bit hasn't been posted already. It's an incredible piece of comedy, even more so given that it's about loading the dishwasher.
The C64 starts up straight into BASIC from ROM. Unlike some other contemporary computers, it doesn't attempt to boot from any external devices (except the cartridge port). There isn't really a DOS in the usual sense. Apart from simple support for loading and saving programs, and a very basic channel I/O facility, everything else is handled by the firmware in the disk drive, which has its own 6502 and operating system.
For example, there's no command for getting a directory listing. You type `LOAD "$",8` (8 being the disk drive), and the drive pretends there's a BASIC program called `$` that happens to contain a directory listing you can then look at with `LIST`. (https://en.wikipedia.org/wiki/Commodore_DOS#/media/File:Comm...)
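From memory, the whole exchange looks roughly like this (disk name and files invented for the example):

```
LOAD "$",8

SEARCHING FOR $
LOADING
READY.
LIST

0 "MY DISK         " MD 2A
13   "SOMEGAME"        PRG
3    "NOTES"           SEQ
648 BLOCKS FREE.

READY.
```

The "line numbers" in the listing are actually the file sizes in disk blocks, which is why they look so irregular.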
By default, LOAD loads tokenized BASIC programs, but if you add an extra `,1` to the command, the file can contain arbitrary data starting at any location in memory. You could use this to load a machine language program and then run it with `SYS <location>`. Clever programmers figured out they could skip this step by having their file overwrite a vector that gets called after the load completes and jump right into their code, resulting in every Commodore kid being able to type `LOAD"*",8,1` on autopilot.
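Spelled out, the two-step version is just this (the program name is made up, and I'm assuming it loads itself at the popular spot $C000, i.e. 49152):

```
LOAD "SOMEPROG",8,1
SYS 49152
```

The autostart trick collapses that into the single `LOAD"*",8,1`, with the file's own load data overwriting the vector so the program starts itself.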
I got distracted by other trivia (I grew up with this computer, it was hugely influential, and I will adore it forever) before getting to the actual point: the C64 uses a variant of the 6502, the 6510. It has a special register for swapping out any combination of the three ROMs (BASIC, KERNAL (sic), and the character ROM), plus the I/O registers, all of which overlay portions of the 64K address space. If your code doesn't use those, you can access the RAM they are hiding by turning them off.
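To give a taste of that register without dropping into machine code (you can't bank out the BASIC ROM from a BASIC program, since the interpreter is executing from it), here's the classic trick for reading the character ROM, which normally hides under the I/O area at $D000. The addresses are from memory, so treat this as a sketch:

```
10 POKE 56334,PEEK(56334) AND 254 : REM pause the CIA timer IRQ; the KERNAL handler needs the I/O registers we're about to hide
20 POKE 1,PEEK(1) AND 251         : REM clear the CHAREN bit: character ROM appears at $D000 instead of I/O
30 FOR I=0 TO 7 : PRINT PEEK(53248+I) : NEXT : REM read the 8-byte bitmap of the first glyph (@)
40 POKE 1,PEEK(1) OR 4            : REM map the I/O registers back in
50 POKE 56334,PEEK(56334) OR 1    : REM restart the timer IRQ
```

Machine language programs do the equivalent with a couple of LDA/AND/STA instructions on address $01, which is how you get at the full 64K of RAM.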
On my Atari there was no DOS either. When you start the 65XE you can hold (iirc) START to start loading an application from the cassette recorder, but it was recommended to hold both (again iirc) START and OPTION to bypass BASIC, because the BASIC interpreter, being held in memory, somehow interfered with bigger games (I think this was due to memory, but I'd like to learn from someone who knows). I got into this trouble myself sometimes. You could also have a cartridge with DOS-like Turbo management, which let you scan the cassette for a given filename with a binary application, but no one used this because it would take crazy long. I never had the chance to use a floppy disk, but I think it behaved in a similar way (you had to have a floppy with DOS and hold START when powering on the computer to load it). At that time the FDD drives for Atari were horrifyingly expensive (they had the same 6502 CPU, and there were even some demos which used this CPU as a coprocessor), so I stayed with a cassette recorder with Turbo.
Of course games were also sold on cartridges, and this was the fastest way to play, but it wasn't popular in my country.
The original Atari 400/800 included BASIC on a ROM cartridge.
To use BASIC, you plugged the BASIC cartridge into the system and powered up.
To boot something else (games and so on, from either cassette or disk), you first removed the cartridge, then powered up.
With the XE series, BASIC was built in to the console, so the "magic keys" were needed to tell the hardware to virtually "unplug" the BASIC ROM before it tried booting from any connected devices.
Yes, Option would disable Basic on boot.
The first Ataris (400 and 800) came with a BASIC module (cartridge) that you had to plug in before starting the computer in order to use BASIC - or likewise the Assembler module. The module would then use certain parts of the precious 64 KB of RAM - actually much less, because the OS in ROM would write itself into RAM on startup and take about 20 KB away. So a program or game had about 40 KB of space to use, and BASIC would take some more away. That wasn't a problem on the 800: you would plug the module in to use BASIC, and leave it out if you wanted to load a game. But with the XL, BASIC was built in, so you needed a way to disable the automatic BASIC load at boot time, or many games could not get all of the memory they needed. Hence, the OPTION option at startup.
At least I remember it this way, but I only had an XL, not the older ones, and now I remember that the 800 had only 48KB of RAM, so it was probably more complicated than that!
> much less because the OS in ROM would write itself into RAM on startup and take about 20KB away.
RAM shadowing of the ROM did not exist on the Ataris (at least not in the original 400/800 models). The ROMs were simply physically connected to actually "be" the top 16KB of the 6502's 64K max address space. The CPU executed the ROM code by reading it directly from the ROM chips.
Which is also the reason the original 400/800 models were limited to 48k max RAM. 16k of the address space was already used by the ROMs.
I was thinking of doing a kickstarter a while back for something similar. One surprising thing that kept me from proceeding (or at least, a thing I used as an excuse to not do the project) was that the guy who "owns" the pomodoro timer trademark is a total jerk about it.
I always find the Italian kitchen a fascinating contrast between an appreciation for artisanal handmade food and the love of shiny stainless steel gadgets.
Having read both of them (well, I had the dragon book as a compiler course textbook; couldn't really read it all, dunno who can), I think that's not a fair comparison.
AoE is extremely practical. I think the debate here is what exactly counts as "fundamentals" for electronics. I read AoE with high school physics and some hands-on tinkering (mostly with exposure to software in embedded systems) as my background. At that point in my life I found it readable and enjoyable. It will help you get to the next level. I probably skipped some sections that weren't of interest, though. I probably built my first electronic circuit in elementary school (some lights, switches, a battery, etc.). If you have no clue about anything electronics, then yes, this is not the book for you. But it still is "electronics fundamentals" despite that.
I already knew something by the time I read it so that must have helped. I guess you do need a certain maturity level (in the subject) to get started but once you have it (maybe from somewhere else) I think it's great.
It reminds me of my first time trying to learn assembly language when I was in my early teens. I just could not make any sense of it. I knew a little bit of PASCAL and BASIC at the time, and that was just alien territory. When I came back a few years later, after some more exposure, it all came together.
I just spent several long days implementing my first modern CSS responsive web design, learning flexboxes and grids and what-can-I-use along the way. I currently have three different browser windows and two device emulators open across two monitors. So what I'm saying is, I have an appreciation for how this app could probably have saved me a lot of trouble if I'd known about it last week.
As a purpose-built tool, it improves on the experience of having multiple windows open. While the headline feature is keeping them all in sync, for my particular app, that's not as big a deal (though it's pretty cool that it will even do that for stuff like tweaking CSS styles in the inspector).
I've been playing around with it for a few minutes and I think what I'm really appreciating is that it's filled with dev tools of all kinds and it's really optimized for working on web sites rather than browsing. It can automatically open panes based on CSS breakpoints, and it has presets for many devices. Some of it is things I had in Chrome, but better, like rulers and guides and grids. Even the way screenshots are implemented shows they put thought into saving time and hassle over a thing you could of course do before with a few more steps. And it's not all related to layout: it shows meta tags and icons and previews for social media sharing.
Anyway, it's pretty cool IMO and I'll probably end up buying it if I keep working on web apps. The only downside I've noticed is that it feels a little sluggish, even when not heavily loaded down (e.g. just 3 panes). I'm using this monster M2 Ultra Mac Studio so it's a bit unusual for a browser to lag.
Polypane lets you see everything happen simultaneously, which, as I understand it, is way less tedious than refreshing and changing your viewport and whatnot for however many permutations you're testing. As a general example, if you make an inline CSS change in a browser tab, that generally only affects the tab you're currently in.