“I wrote FAT on an airplane, for heaven's sake” (msdn.com)
306 points by Hrundi on Oct 10, 2013 | 115 comments



I had a manager who would repeatedly use this type of remark. We had this particular application that no one could touch without breaking something. It was always a mess. He would often remark, "I wrote the whole thing over a weekend, why can't you guys make a tiny little change without breaking it?" One day I finally got fed up with him and replied, "Because it's the quality of work you'd expect from an entire application written over a weekend."


On the opposite side of the coin, I was a professional developer 25 years ago (and still keep my hand in as a hobby), and I regularly struggle to understand why things take as long as they do these days. We were quoted a day for something that I said could be written in about half an hour - and their quote included nothing more than coding and unit testing it.

So I was challenged to prove it. Which I did (or rather I did it in 10 minutes). When their code came back, a day later, it didn't actually work. When we pointed that out to them, they came back another day later with code that looked almost identical to the one I'd written in 10 minutes.

tl;dr. Sometimes managers don't realise the complexity of modern software, but sometimes modern developers are actually just plain slow.


I have been developing software for 20 years and I am currently replacing some applications that were written in the last five years. They use OOP, patterns and the latest tools. Often I wade through dozens of lines of code trying to find the meat of what they are trying to do. I usually find easier and shorter ways to implement the same thing just through better design and avoiding repetition. I don't think programming is just about the tools; I think it is about structure and organization.

Every pattern, layer, feature or tool that you introduce in a project makes it more complex, so you really have to use good judgment when you decide what to add.


>Every pattern, layer, feature or tool that you introduce in a project makes it more complex,

That really shouldn't be true.

A pattern is simply a commonly used way of solving a common problem. If you're picking one that makes it more complex, then you've picked the wrong pattern. The reason that I could do the example I talked about in my original post in 10 minutes was because it was a bog-standard design pattern designed to solve exactly the issue that the software needed. Amongst the many reasons why it took the developer 2 days was the fact that they had to pretty much reinvent this pattern from first principles.

Equally, if you're introducing a tool that doesn't simplify something that you'd need to do/build manually then you shouldn't be introducing that tool.


Times have changed considerably. When I started in the industry 20+ years ago our tools were a compiler, a debugger, and an editor. Any library not provided by the compiler we wrote ourselves, simply because purchasing libraries was very expensive. We also weren't afraid to code a solution specific to the problem, rather than being overly concerned with abstracting every library for potential use with any future application. Most work went into the server code, which housed the application logic and coordinated with the database; graphical UIs were a luxury.

Nowadays a web application, for example, is tiered more. Different frameworks are encouraged for the different tiers: Bootstrap, Spring, Hibernate, etc. Each one is its own ecosystem and is built on top of other libraries. It's very common to make web service calls outside your WAN. Quickly you find out that "standards" have different interpretations by different library authors.

UIs are no longer an afterthought. They affect how successful your application is. (My observation is that a well-designed UI can cut down on user errors and training by two thirds over a merely functional UI.)

I'm keeping the example simple by not mentioning necessary middle-tier components that we didn't use 20+ years ago. We also didn't worry about clustered environments, asynchronicity, or concurrency.

Not knowing the application you needed or how the analysis was done by the coding team, it's hard to say if some of their "slowness" was getting to know the problem AND coming to understand how extensible, performant, and reliable you wanted it to be. My own approach is usually to solve the "happy path" first and then start surrounding it with "what ifs" - e.g. what if a null is passed into the function, etc. Over time I refactor and build in reliability and extensibility. The coding team you referred to may have used a different approach in which they tried abstracting use-cases and building an error-handling model before solving the "happy path".

Your "tl;dr" is spot on. But I'd like to raise a cautionary flag about judging modern development through a 25 yo lens. The game has changed.


I'm still heavily involved in IT. I know the game has changed. As well as the complexity that's come into it, there's an awful lot of complexity that's disappeared. Those necessary middle-tier components and UI frameworks have to be used, but they no longer have to be built from scratch.

>We also didn't worry about clustered environments, asynchronicity, or concurrency.

Clustered environments, probably not. But asynchronicity and concurrency were the bane of my life. Writing comms software back in the day involved having to hand-craft both the interrupt-driven reading of data from the i/o port and the storage and tracking of that data in a memory-constrained queue, synchronised with displaying that data on the screen. And the windowed UI had to be hand-crafted as well. Error handling was no more of an afterthought then than it is now - and you couldn't roll out a patch for a minor defect without manually copying 500 floppy disks and posting them to clients.

I understand why some bits of development take a long time, but the reality is that 90+% of the development work that our place does these days is what an ex-manager used to refer to as "bricklaying" - dull and repetitive work that involves pretty much zero thought to implement. Extract file X (using the language's built in drag and drop file-extract wizard), sort it by date (using the language's built-in sort module), split into two separate files (using the language's built-in file split module) and load into database Y (using the language's built-in database load module).
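
To make the point concrete, the whole "bricklaying" job above is a dozen-odd lines in a modern scripting language. A minimal sketch in Python, where extract_x.csv (with 'date' and 'value' columns), the cutoff date, and the SQLite file y.db are all hypothetical stand-ins:

    import csv
    import sqlite3
    from datetime import date

    # Extract file X.
    with open("extract_x.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Sort it by date.
    rows.sort(key=lambda r: date.fromisoformat(r["date"]))

    # Split into two separate files around a cutoff date.
    cutoff = date(2013, 1, 1)
    older = [r for r in rows if date.fromisoformat(r["date"]) < cutoff]
    newer = [r for r in rows if date.fromisoformat(r["date"]) >= cutoff]
    for name, part in (("older.csv", older), ("newer.csv", newer)):
        with open(name, "w", newline="") as f:
            w = csv.DictWriter(f, fieldnames=["date", "value"])
            w.writeheader()
            w.writerows(part)

    # Load into database Y.
    db = sqlite3.connect("y.db")
    db.execute("CREATE TABLE IF NOT EXISTS y (date TEXT, value TEXT)")
    db.executemany("INSERT INTO y VALUES (?, ?)",
                   [(r["date"], r["value"]) for r in rows])
    db.commit()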

And even with all of these tools, it still takes 10 times longer for people to develop these kinds of things than it did when we were writing all of this from scratch. It's not because of complexity of coding, of environments, or of frameworks. The problem is that much of the IT industry has replaced skill and knowledge with process, contracts, documentation and uninterested cheap labour.


I once worked with a founder who would always pull that kind of garbage: "I wrote [simple software with no dependencies or integration requirements] in [N] days! Why is it taking you guys [M] months to write [complex software relying on several 1st and 3rd party libraries, a component that needs to work within a large, old legacy system]?"

Good on the dev manager!


There was once a programmer who was attached to the court of the warlord of Wu. The warlord asked the programmer: “Which is easier to design: an accounting package or an operating system?”

“An operating system,” replied the programmer.

The warlord uttered an exclamation of disbelief.

“Surely an accounting package is trivial next to the complexity of an operating system,” he said.

“Not so,” said the programmer, “when designing an accounting package, the programmer operates as a mediator between people having different ideas: how it must operate, how its reports must appear, and how it must conform to tax laws.

By contrast, an operating system is not limited by outward appearances. When designing an operating system, the programmer seeks the simplest harmony between machine and ideas. This is why an operating system is easier to design.”

The warlord of Wu nodded and smiled. “That is all good and well,” he said, “but which is easier to debug?”

The programmer made no reply.

The Tao of Programming, Geoffrey James


I believe the developers working on Windows Vista, Copland, Taligent and GNU/Hurd would like to have a word with Mr. James.


Heh, reminds me of a boss that criticised my work a few months back:

Boss: "Hey antimagic, why haven't you finished that module yet?"

Me: "Because the code it's interfacing to is a big ball of spaghetti." (paraphrasing, because it was the boss in question's code - so I was much more diplomatic)

Boss: "You need to learn to be able to work with other people's code better - I can get in and modify your code easily."

Me: <stare at boss waiting for penny to drop>

... 30 seconds later, after no reaction ...

Me: "Yes, I do write fairly clean code. Thank you for noticing."


I love it when someone's attempt at insulting you and complimenting themselves results in the opposite of their intent. But is it more satisfying when they realize it, or when they are unaware?


Also, I've generally found it's easier to develop starting with a blank directory tree (no code) than to inherit a legacy codebase: you have to make the sometimes grueling investment of time, energy, focus and trial-and-error needed to come up to speed on it, understanding-wise, at the fine-grained level of detail you need to code confidently, and then figure out how to make a positive change that doesn't make some other thing worse. (And it's harder still if there are no automated tests or documented manual test plan.) I call it OPC, for Other People's Code. One of the anti-patterns of software engineering. It's distinct from NIH (Not Invented Here), which is another anti-pattern.


Of course it's always easier. The question is, can the business afford to wait while you greenfield another app? Usually it can't. That's why refactoring.

First step is fixing the development environment / build process and getting a staging server up. It will inevitably be broken / nonexistent, with frequent edits directly to production necessary. The last guy will have internalized a great deal of operational workarounds that you'll need to rediscover, then codify into the app.

Next you write tests. There will be none. Once you have a decent workflow, you can start to identify the worst offenders. All the while you'll be having to change the codebase to meet project requirements; this will give you a good idea of where the really bad shit is. Unit test all of it, and if you're feeling froggy, write some integration tests. Once you get to this phase, you should be unit testing your project work.
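
As a concrete starting point, here is what a first characterization test can look like - a minimal sketch using Python's stdlib unittest for illustration, with a hypothetical invoice_total standing in for whatever the real worst offender is:

    import unittest

    # Hypothetical stand-in for a function dug out of the legacy codebase.
    def invoice_total(items, tax_rate):
        return round(sum(price * qty for price, qty in items) * (1 + tax_rate), 2)

    class TestInvoiceTotal(unittest.TestCase):
        # Characterization tests: pin down current behaviour before refactoring.
        def test_happy_path(self):
            self.assertEqual(invoice_total([(10.0, 2), (5.0, 1)], 0.1), 27.5)

        def test_empty_invoice(self):
            self.assertEqual(invoice_total([], 0.1), 0.0)

    if __name__ == "__main__":
        unittest.main()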

Only after those two are completed can you start refactoring. Treat it like TDD. Keep an eye on larger goals like 12-factor conformance. It may look pie-in-the-sky at first, but it will give you ideas on what to focus on. The main advantage of refactoring over ground-up rewriting is that you don't have to sell it to your boss. You just do it, in and around normal project work.

The biggest hurdle is the first step. It's scary to fuck with deployment. The approach I've come up with is to fork the codebase and rebuild the tooling on top of that, deploying first to staging, then to production, alongside the current working copies. Once you're satisfied, flip the switch. You may have to flip it back, but at least it will be easy and instantaneous.

These lessons are from my ongoing project to modernize an ancient Rails 2.3.5 website running on Debian Lenny. Linode doesn't even offer that OS anymore; I had to cannibalize a server with an EOL app on it for a staging environment. I can't use Vagrant because there aren't any Lenny boxes.

It's long, arduous and slow. I fucking love it.



What do you do when management won't allow Unit/Automated testing because it takes time away from writing user facing code?


Do it anyway. The thing about testing is that it's a skill that you have to work on. You have to know what to test and how. Testing shouldn't affect the speed at which you write code at all.

The reason your boss is saying no is that you had to ask him. The reason you had to ask him is that you know it will take more time than it will save, at least at first.

You have to learn this skill somehow, and the best way to do it is on something that matters rather than with a side project. So write tests at your job, and learn the skill of testing on your employer's time, without their knowledge. Or on off hours if that makes you squeamish. But learn the skill, it's important.

Then, when you refine your testing workflow to the point where it makes more sense to test as you write, don't bother hiding it anymore. When they ask, show them your workflow, explain how it's not taking up too much time, and list out the benefits of testing. If they tell you to stop anyway, take your new skills and find a new job. You're growing past the ability of your current job to challenge you.


Can you explain or give some links on "12-Factor Conformance"? I did a few quick searches and nothing popped up.


Please don't sign your comments. (http://ycombinator.com/newsguidelines.html)


A company I worked for had a .NET application that was critical to basically all of corporate ticketing and resource management. It would no longer compile as a whole except on one guy's laptop and this didn't seem to bother anyone but me. Instead each file was modified and placed on the server to be compiled at runtime (causing a massive slowdown). It was mind boggling just how little people cared and how critical it was for day to day operations.

Eventually that laptop was destroyed in a bizarre accident (dropped at the airport security checkpoint was the claim) and last I heard they were regularly backing up the directory on the web server and still dropping files in to compile at runtime.

This is what happens when someone writes something and no longer has responsibility to maintain or document it.


Odds are Gates didn't write the complete code, probably more of the blueprint. Also worth noting: the original implementation of FAT was pretty basic/fundamental. Could one even call it an application?


According to Wikipedia:

> The original FAT file system (or FAT structure, as it was called initially) was designed and coded by Marc McDonald,[9] based on a series of discussions between McDonald and Bill Gates.


yeah because writing a filesystem was pretty trivial back then.


I think the real lesson here is that every time you add a feature, you are adding development overhead to every other feature, leading to exponential development time. I'm sure that if segment tuning had been one of the first things to go into Windows, it would have been much simpler to implement. However, at later stages in the development you have to account for how different, already-written applications are loading their memory, and thus the feature needs to handle far more edge cases than it would have as an earlier-implemented feature. I'm not saying the overall time to develop segment tuning would have been shorter in the long run had it been developed first, but the perception of its development time would be shorter (since any segment-tuning edge cases that you ran into while writing an application would be considered part of the app's development time, not segment tuning's).
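
To put a rough number on that intuition: in a toy model where the k-th feature has to be reconciled with each of the k-1 features already present, total integration work grows quadratically - super-linear, even if not literally exponential:

    # Toy model (illustrative only): the k-th feature costs one unit of
    # integration work per feature that already exists.
    def total_cost(n_features):
        return sum(k - 1 for k in range(1, n_features + 1))  # = n(n-1)/2

    for n in (1, 10, 50, 100):
        print(n, total_cost(n))  # 1 -> 0, 10 -> 45, 50 -> 1225, 100 -> 4950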

I'm sure that if Mr. Gates had implemented FAT at a later date, he would have needed a much longer plane ride.


I don't think segment tuning is the kind of "feature" your lesson needs as an example. Segment tuning is just a link-time optimization phase. The people doing manual segment tuning were just producing an ordered list of the already-written functions, to be used as input for the linker. These days, it would be implemented as a completely automated process using profile-guided optimization. It's not something that you can do early in the project and get it over with, because any time you do anything to change the call graph of your code base or the size of a compiled function, you need to re-tune.


Ah, I didn't realize this was done as a manual process back then. I assumed the engineers were working on an automated system to do the segment tuning for them, and that system was the new "feature" that Mr. Gates thought the engineers were wasting too much time on. Feel free to re-read my above comment with "segment tuning" replaced by "some other feature." Thanks for the correction.


I'm confused by your comment. "Segment tuning" isn't a feature, it's a process. One only did it for performance gains. That one would have to do it for performance gains is a result of the fact that code segments were loaded in 64 KB chunks. It's another instance of trying to take advantage of spatial locality to achieve better performance.


Another possible conclusion from your analysis is that a proper architectural separation of concerns (which could allow one to bang out code like he's still on that plane) has not been practiced in Windows, and by the time Bill was offered the opportunity to code something in Windows it was already hopelessly entangled.


Not exactly. I am a huge fan of code modularization, but what modularization offers you is the ability to decrease the exponent (i.e. to be devTime^1.01 instead of devTime^2). Some features, though (including segment tuning, from the sound of it), will touch a ton of other features regardless of how well you modularize your code.

I work at a networking company (Arista), and a lot of the interesting problems come from this sort of interaction. Our entire OS was built so an agent would be resilient to changes in another agent, and this modularization means there is very little "spaghetti code". However, when you are building a feature (say, a new routing protocol), you have to be extremely conscious of how it interacts with everything else: various reliability features (e.g. stateful switchovers), configuration features (saved configs, public APIs), resource contention (TCAM utilization), etc. etc. If that new routing protocol was the first thing we implemented on our switch, it would be a complete breeze. In the context of other features though, this becomes a more intensive project (though in codebases without proper modularization you'd find this task "herculean" as opposed to "intensive").


pg's whole thesis for Lisp was that it reduces the growing cost of modifying larger codebases to O(log n) rather than O(n^k) or O(n) (n being the size of the codebase).


That is true of immutable-by-default languages like Clojure; I don't see how that is possible for a mutable, dynamically typed language like Common Lisp.


> I am a huge fan of code modularization

These days that's no problem. But back then computers were slower and memory much tighter. Making clean interfaces to integrate things made the code too slow and large.

On the other hand, projects were smaller, so there was less need.


Or maybe not. The modern version of segment tuning is cache friendliness. Also, with modern virtual memory, function layout may be important in order to minimize page faults.


Completely unconnected to the discussion, and excuse the incursion, but your (Arista's) switches _rock_. Got 8 at work - loving the 10G interfaces.

Thanks, carry on. ;)


Gates is one of those rare people who is at the same time a good developer, a good manager and a cunning businessman. I started to appreciate him after watching the Triumph of the Nerds documentary series. I literally LOLed for 5 minutes after watching Gates parodying IBM at an event organised by Jobs: http://youtu.be/riyAe4BKAng?t=20m1s


Watching them so young is such a trip! Thanks for the link!


Wow, Larry Ellison predicts mobile web at 47:00.


More Chromebook than Mobile Web.


Thank you for the link. I didn't know Jobs and Gates were fighting IBM together back then.


TLDR: Bill Gates is a hoss.


From 12BitSlab in the article's comments:

> I don't mind if billg gets a little arrogant at times. One merely has to look at how he wrote the ROM code for the Altair to realize his abilities.

> Also, ALL of the concepts embodied in modern tablets and smartphones were "invented" by billg when he wrote the code for the Tandy 100. Things like "instant on", data stored in non-volatile memory, small productivity apps, continue from the same point after power down, etc. Personally, I put billg in the Top 5 of all-time CS people who contributed to computing.

This is an interesting claim, I was always under the impression that while Gates could code, he wasn't at all a CS giant.


When I was an undergrad (a year ahead of Gates, 72-76), Harvard had only "intro to programming" undergrad courses, so those of us CS (well, applied math) majors who arrived at college already competent at programming pretty much just took all grad CS courses for 4 years. The faculty were enlightened enough not to care that we weren't grad students.

Gates was certainly one of the brighter undergrads in those courses. I don't know if that makes him a "CS giant," but he was no slouch.


Awesome story, thanks.

While you guys were coding away at Harvard, I was not yet able to properly focus my mind, and so took to running around the streets of Cambridge(port) in diapers instead ;-)

p.s. from the above one can assume I was either born in '72 or was taking far too much acid for my own good.


He had the fastest algorithm for pancake sorting for a long time. Not the coolest thing, but still pretty impressive, and more than just "can code".

http://www.npr.org/templates/story/story.php?storyId=9223678...


You should look at the 16-bit VM Woz wrote for the Apple ][

http://www.6502.org/source/interpreters/sweet16.htm

Anyway, Engelbart, et al, had figured out all this stuff 10 years earlier.


I'm old enough to know he didn't carry a computer on the plane....


But too young to know writing code and speccing data structures on paper (and/or napkins...) was quite common before airplane-friendly computers were available? (And for a long time after, really; ubiquitous ownership of laptops is a quite recent thing).


From this Woz quote it was pretty common:

"I wrote all my code on paper in hexadecimal. I couldn't afford an assembler to translate my programs into hexadecimal bytes, I did it myself. Even my BASIC interpreter is all hand written. I'd type 4K into the Apple I and ][ in about an hour. I, and many others too I think, could sit down and start typing hexadecimal in for a SMALL program to solve something that occured or something that somebody else wanted. I'd do this all the time for demos. I certainly don't remember which hexadecimal codes are which 6502 instructions any longer, but it was a part of life back then."


When I started my programming career, my first boss did not know how to use a text editor. He could perform a randomizing routine in his head, but he trembled at using a text editor - he still used punched cards and/or a mainframe utility that emulated a punch card. And lots of programmers coded first on columnar grid paper (shades of green and white, demarcations at 8, 12, 16 to help you indent properly).


My community college still had us writing C on huge tablets of that IBM grid paper in the late 90s. I can't imagine writing huge amounts of production-worthy code that way.


Possibly stupid question: why did he not write the assembler himself?



I used to fill spiral notebooks with code back in the 70's. This was back when I'd have to go to the "computer center" to use the actual computer.


I still prefer to write on paper on planes.


Me too - although if it's code, then it's going to be more like pseudo-code or simply "concepts" and ideas.


For that matter, pocket calculators programmable in BASIC started becoming quite inexpensive in the mid-80s.


According to this, Gates did some of the design, but Marc McDonald, employee #1 at MS, helped with the design, and did the coding in 1977:

http://en.wikipedia.org/wiki/File_Allocation_Table#Original_...

Like a lot of old stories remembered after-the-fact to make a point, it seems that the truth is more complicated.


Thanks for pointing this out. It's all too common to overlook employee contributions to projects.


I think the point of the story wasn't proper attribution, it was a way for Gates to prod people to work faster.


By misappropriating their work?


I coded the entire Linux operating system in one hour. Why can't you do something just as amazing with the same amount of time?

"but appropriation of work isn't relevant at all!!!!!" Of course it's relevant. The prodding doesn't work if the justification for the prodding is a lie.



According to one book (Hackers?), he wrote the loader code for BASIC for the Altair on the plane to NM.

If I remember right, he did have a Compaq "portable" he lugged around in the early portable days. Something like http://oldcomputers.net/compaqi.html

It obviously came out after DOS / FAT.


That was Paul Allen, if I remember correctly.


I'm pretty sure it was Bill Gates (although Paul Allen could have, too). I thought the source was a Chaos Manor column back in the day.


nah... they decided Paul should go because Bill still looked like he was a high school sophomore.

http://harvardmagazine.com/2013/09/walter-isaacson-on-bill-g...


Pen and paper works just fine on a plane and I'm old enough to know that back in those days lots of coding was done that way.

At one work place, I spent most of the day away from the computer (terminal) waiting for the operational boys to get their stuff done.

I would spend most of my coding day scribbling out code changes onto the fanfold printout.


And today quite a few people complain about having to code on a whiteboard while at a job interview. How times have changed.


For a lot of people, writing on a whiteboard with 4 prospective employers breathing down their neck is totally different than sitting on a plane or in a quiet corner alone with some paper to write on.


Or pen or paper


> pen or paper

How did he do it with just paper? Origami? ;-)


Obviously he folded it into little bits. :)


"pen or paper", not "pen xor paper" ;).


If you're not coding in blood, you're taking the soft option.


blood.


People didn't write computer code in IDEs back then, or even directly in text editors really.


So the FAT story may be an exaggeration, but does Brandon Eich's Javascript-in-10-days (http://www.quora.com/JavaScript/In-which-10-days-of-May-did-...) still count as a "STFU-I'm-the-boss" feat?


You have a typo: Brandon Eich -> Brendan Eich. Not nitpicking, am compelled to point out since it is a name.


So that explains it...

;)


How do we explain C++?


Compatibility with C and its toolchain.


I think that's a weak response. Obj-C was just as compatible, with a far less horrible language design.


Actually I don't have any praise for Objective-C's design.

And it also suffers from the same issues that pollute C++'s design.


Am I the only one who didn't interpret the quote as him showing off? If I used that line, it would be a way of telling them that the problem of segment tuning is an artificial limitation imposed by a format that hasn't gotten enough attention. "I wrote FAT on an airplane" would be my way of saying, "don't come up with better ways to work around my half-assed implementation. spend time improving the implementation so segment tuning isn't even something you have to worry about."


FAT and segment loading have nothing to do with each other. Gates was just bragging about having written something complex in a short period of time.


Still, he may have been pushing his team to do better.


When he says "He wrote FAT on an airplane" he means with pencil and paper since FAT is so old that there were no laptops to do that on. Which maybe he figured out all of the specs, but he didn't put code down and debug it. Of course in those days writing it on paper was quite common before people typed it into the computer. And that is quite impressive in and of itself. Most of the time people treated that activity as if they were coding so they were very precise. Now a days we're fast, but we aren't as precise.


You can write code on paper and debug it in your head, running a simulator of the target environment in your brain. There is no "as if"; this was actually coding.

Coding is something you do in your brain, not in an editor.


Bill Gates complains that people aren't doing real programming and people berate him to shut him up. Steve Jobs throws the first iPod into a fish tank and he's revered as a God.

I guess it's no wonder why the mainstream of Apple and Microsoft is what it is.


Was it one of those planes that flies for six months?


The initial 8-bit version of FAT (which I personally do not know the history of, but Wikipedia suggests Gates is probably taking more credit for than deserved) and its rough successor, FAT12, are very simple structures. Weird products of their time and place, but simple nonetheless.

Coming up with the blueprint for such on a flight of a few hours seems quite reasonable for someone with the requisite knowledge.
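
For a sense of just how simple: FAT12 packs two 12-bit cluster entries into every three bytes, and a file is nothing more than a chain of those entries ending at a terminator. A minimal sketch in Python, assuming fat holds the raw table as bytes:

    # FAT12: entry n lives at byte offset floor(n * 1.5).
    def fat12_entry(fat, n):
        off = n + n // 2
        if n % 2 == 0:   # even entry: one byte plus the low nibble of the next
            return fat[off] | ((fat[off + 1] & 0x0F) << 8)
        else:            # odd entry: high nibble plus the next full byte
            return (fat[off] >> 4) | (fat[off + 1] << 4)

    # A file's clusters are just repeated lookups until an end-of-chain
    # marker (0xFF8 or above for FAT12).
    def cluster_chain(fat, first_cluster):
        c = first_cluster
        while c < 0xFF8:
            yield c
            c = fat12_entry(fat, c)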


Maybe a fine line between "writing" and "designing" and "prototyping". :-)


If he'd kept one foot in the codebase - he might not have made so much money - but his software legacy might have turned out differently.


I'm not sure that Microsoft's is a bad legacy. They did a whole hell of a lot to put programming and office tools in the hands of normal people.


I agree. Regardless of what they've done, they largely accomplished their vision. Their vision was to put a PC in every home. They started with such a lofty goal and delivered. The first thing I ever learned on a PC was typing a: <return> to load a floppy in MS-DOS.

It seems this is largely their problem today - they no longer have a lofty goal to focus their energy on. Most companies have one. Google - organize the world's information etc. etc.


"Tony Stark was able to build [a miniaturized arc reactor] in a CAVE! With a box of scraps!"


I'm sorry, sir. I'm not Tony Stark.


To be honest, I can't afford to commute and do nothing; I have to plan new features for my products, redesign existing ones, or at least read a tech book or article, otherwise I feel really frustrated, because I dislike staying still.

The best ideas I have ever had happened during my commute, so I feel familiar with the "I wrote FAT on an airplane" statement.


This is why I'm envious of cities with real transit. Driving sucks.


So that's why everyone uses NTFS...


FAT8 was written in 1977. Nobody is using any filesystem written in 1977 these days.


ext4 and other filesystems; NTFS is a little pony if you go to the server world.


NTFS is way more sophisticated than ext2/3/4. ZFS has probably got it beat, but NTFS is actually a pretty great filesystem. Don't confuse 'Windows' with 'NTFS'.


ext3/4


Isn't this basically Bill Gates exemplifying the "Rockstar Programmer" persona?

"I did X in Y time so why don't you follow my lead" may have been a motivational tactic at the time, but it did backfire. I'm forced to wonder if this is how the motif began.


About "optimization", Gates likely had a point. That problem of assigning code blocks (subroutines/functions) to memory segments does suggest a significant role for optimization.

What is optimization? For a very sparse description, for the set R of real numbers, positive integers m and n, functions f: R^n --> R and g: R^n --> R^m, find x in R^n to solve

minimize f(x)

subject to

g(x) >= 0

So, set up a mathematical description of the problem based mostly just on the 'cost' function f, that is, what we want to minimize, and 'constraints' g that keep the solution 'feasible', that is, realistic for the real problem. Then look for a solution x. Yes, easily enough such problems can be NP-complete, and still difficult otherwise.

Likely what Gates wanted was function f to be the execution time and function g to force honoring the 64KB limit per segment, etc.

Then a big question would be: which software is having its segments assigned?

So, for an answer, just take some Windows and/or Office code and get the total execution time on a typical workload.

Might also want to be sure that on any of several workloads, the optimal solution was no worse than some factor p in R of the best solution just for that workload. Then keep lowering p until the f(x) starts to rise significantly and come up with a curve of f(x) as a function of p. Present the curve to Gates and have him pick a point.

Gee, should we rule out having the same subroutine/function in more than one segment? Hmm ....

Just where Gates learned about optimization would be a question, but he had a point.
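
As a toy illustration of that formulation - not how Microsoft actually did it - here is a greedy first-fit heuristic in Python that packs functions into 64 KB segments by profiled hotness, standing in for a real solution of the minimize-f(x) problem (and ignoring the duplicate-subroutine question above; all names and numbers are made up):

    SEGMENT_SIZE = 64 * 1024

    # functions: list of (name, size_in_bytes, hotness); hotness stands in
    # for profiled call frequency. Hottest functions get packed first, so
    # hot call paths tend to end up sharing a segment.
    def pack(functions):
        segments = []  # each is [bytes_used, [function names]]
        for name, size, _ in sorted(functions, key=lambda f: -f[2]):
            for seg in segments:
                if seg[0] + size <= SEGMENT_SIZE:
                    seg[0] += size
                    seg[1].append(name)
                    break
            else:
                segments.append([size, [name]])
        return [names for _, names in segments]

    funcs = [("main", 12_000, 90), ("draw", 40_000, 80),
             ("save", 30_000, 5), ("parse", 25_000, 70)]
    print(pack(funcs))  # -> [['main', 'draw'], ['parse', 'save']]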


You can read it a couple of ways, you can take it as Gates showing off that he did something hard quickly, which is certainly valid, or you can take it as Gates saying 'stop fiddling whilst Rome burns and get it done'. You can hammer out complex ideas quickly and lose your life twiddling the details, or just do it and move on.

Or more likely he was being a bit of a jerk.


Am I not getting something? I thought that FAT was created by Marc McDonald: http://en.wikipedia.org/wiki/File_Allocation_Table#Original_...


It took me a while to realize that it didn't mean writing code that ran aboard an airplane, but code that was written while on an airplane. Somehow the latter didn't even cross my mind as something that could be considered impressive.


And they got a patent for it? :D


IANAL and I haven't read up on everything, but I thought it was just the long filename support that was patented. My understanding is that if you just support 8.3 names it's unencumbered.

Edit with further detail: The thing that's unique about LFN are the rename and delete behaviors. Lots of filesystems support more than 8.3 or multiple names for files (hardlinks) but I don't think any of them have alternate names for files that "stick" when you move them into another dir, or that behave somewhat cleanly when a non-LFN-aware OS does so.


He was talking about FAT8, not FAT32.


So he got out a notepad and pen, then wrote the word "FAT"... whilst on a plane. Awesome!


Skip to the last 2 paragraphs of this article. The rest is filler.


[deleted]


read the article after you finish reading the headline.


I did, did you?



