ASM (grumpygamer.com)
117 points by steveridout on April 12, 2013 | 41 comments



I had the exact same experience, only 25 or so years later.

I learned how to program in BASIC on a Casio25 calculator, and later on a TI-82 (the Casio couldn't do key polling, and my parents wouldn't buy me a new calculator just for programming, so I broke it on purpose to get a new one. Sorry mom! I later got a TI-83, while all my friends had overpowered TI-89s and 92s; the underpowered nature of the TI-83 felt more romantic and elegant to me.)

Two friends and I got really into programming, making tons of games in BASIC. I made a grid-based SimCity clone, a turn-by-turn clone of [1], a very slow dungeon crawler inspired by Diablo (my parents were fairly anti-videogame, so my way of experiencing the games I wanted to play was to program clones of them), a flexible "engine" for text adventures (I was really into pen-and-paper RPGs but none of my friends were into them), etc.

Then one day we discovered assembly, and our minds were blown. We would sneakily use the library computers to print Z80 reference sheets; it was awesome.

Middle and high school were awful times all around for me, but this definitely made them better.

[1]: http://en.wikipedia.org/wiki/Star_Wars:_Galactic_Battlegroun...


>I started to realize that assembly language was real programming. BASIC was just an imitation of programming.

That's exactly why I couldn't be happy if I only knew high-level languages like Java, Python, or C#.

After learning an assembly language and C, I feel much more comfortable using any other language. You know the real thing.


There is no "real thing". Now you'll never be happy until you know you're filling as many execution units as possible, or that your loop targets align on cache boundaries, or that you're stuffing as many ops as possible across the likely-to-be-in-DRAM loads. Then you'll learn Verilog and start worrying about starvation issues in your own pipelines. Then you'll get distracted by a CoffeeScript REST app and worry about getting your MongoDB replication set up robustly.

It's all good. Some people are happy in a single niche, some never are.


Ah, yes. I'm in a parallel programming class at the moment and now I can't stop thinking about cache boundaries, volatility, weird efficient instructions to use... Is this false sharing evicting the cache line? Where can I use rsqrtf? Am I utilizing the SM fully? Is this busy-wait killing the memory controller?

It goes on forever.


Both of you talk as if people generally never worry about such things. That seems unusual to me. Is it because the first language I learned was 65816 assembler?


People never worry about machine-level micro-optimizations in exactly the same way that people never worry about MVC web app frameworks, which is to say "most people almost never" for some appropriate metric of "most" and "almost". It's a niche. My point was just that some people are happy in just one, while others (the poster I was replying to, and myself among them) aren't.


In domains where fine-grained performance is not paramount (i.e. most programs), no, people don't worry about that sort of thing very much. How do you even go about reasoning about cache lines in a Ruby on Rails app?


Sure, and this lasted right up until we got GPUs, where you can take advantage of something even faster than general-purpose ASM: specially developed hardware. The death nail in using ASM was the library. You only need to develop your ASM optimization once, in GCC or a special-purpose library, and then you can just happily re-use it from a maintainable language.


Not really: code in a low-level GPU shader language (I believe that's no longer allowed, though) and compare it to HLSL. Compare the speed of intrinsics with inline assembly.

Sadly, we live in a world where the lie that compilers can optimize better than humans is widespread. The reality is, compilers can optimize better than your everyday Java guy (the kind of person who believes you can just throw more processing power at a problem while the programmer just needs to abstract ad nauseam).

I like using frameworks, libraries, and high-level languages as much as anyone in here, but I know ASM will always be faster; it's just not always suitable for the problem.


It rarely matters enough to justify the amount of work needed.


That's been true since the first compiler. There are always going to be people who want the extra speed and are willing to work for it.


Just fyi, the phrase is "death knell" not "death nail."


I had a similar experience: compiling my little text adventure in Liberty BASIC took almost a minute. When my older brother told me that assembler DOESN'T NEED TO COMPILE, I was completely awestruck. I did write a text adventure in assembler, but it didn't have the same slick "windowy" feel that I was so in love with 15 years ago. But I returned to LB as a better programmer.


>But I returned to LB as a better programmer.

In what way? (Not meant to be accusatory, just curious)


but assembler does need to compile


No, it needs to be assembled.


Semantics. An assembler parses a human-readable representation and emits an executable one. That's a "compiler" in most contexts, and certainly there have been many things called "compilers" that translate simpler things than general-purpose assembly code.

Obviously the distinction being made is that "assemblers", even in the days of 8-bit CPUs (and earlier!), have always been able to translate at or near the speed of the I/O required to read the source. So there's an intuition that says they're "instantly fast" and thus deserve their own name.


Translating assembly to machine code is a trivial compilation, but technically it is still compiling.

Compiling is just translating source code in one language into code in another.


Long-standing convention is that assembling != compiling.

You touched on the reason: compiling is translating one language to another. Assembly code is not changing language, just replacing symbols from human-convenient to machine-convenient, more like encoding.

You wouldn't consider Pig Latin a different language from English; it's just a straightforward mangling thereof.


> just replacing symbols from human-convenient to machine-convenient

It's not quite a one-to-one mapping, though.

For example, there are assembly instructions that map to more than one opcode depending on how they are used, and opcodes can have various flags that change their encoding (prefixes and such), like the x86 MOV instruction. Beyond that, it is also rare to find assembly code that does not make use of macros or assembler-implemented pseudo-instructions.

http://stackoverflow.com/questions/2546715/how-to-analysis-h...
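
To make that concrete, here is a rough sketch in NASM syntax (the operand choices are arbitrary examples, and the hex bytes shown are the usual 32-bit encodings): the single mnemonic MOV selects different opcodes, and picks up prefixes, depending on what you hand it.

  bits 32
  mov eax, 1        ; B8 01 00 00 00  load-immediate form, opcode B8+r
  mov al, 1         ; B0 01           8-bit immediate form, a different opcode entirely
  mov eax, ebx      ; 89 D8           register-to-register (8B C3 is an equally valid encoding)
  mov eax, [ebx]    ; 8B 03           load from memory; the ModRM byte picks the addressing mode
  mov ax, bx        ; 66 89 D8        the same register move, now with an operand-size prefix

Which encoding an assembler emits in the ambiguous cases is its own choice, which is part of why "one mnemonic, one opcode" only holds loosely.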


There is a one-to-one map between a mnemonic plus its operands and the resulting opcode, which is the relevant criterion. The use of macros doesn't change this (though the use of pseudo-instructions can, if they force islands to be created for constants or jump tables).


For nearly any other assembly language, like, say, System/390 or PowerPC, it's a straightforward 1-1 mapping. Macros and pseudo-instructions fall more into the realm of text pre-processing than compiling.


Nit-pick: Power is many-to-one, not one-to-one (there is no "shift" instruction in the machine code, but there is a rotate-and-mask instruction that the shift mnemonics are translated into).
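
For instance (a sketch using the standard PowerPC extended mnemonics; the registers and shift amount are arbitrary), the logical immediate shifts are just alternate spellings of rlwinm:

  slwi   r3, r4, 5           # "shift left word immediate": an extended mnemonic, not its own opcode
  rlwinm r3, r4, 5, 0, 26    # what actually gets encoded: rotate left 5, keep bits 0..26
  srwi   r3, r4, 5           # "shift right word immediate"
  rlwinm r3, r4, 27, 5, 31   # its expansion: rotate left 32-5 = 27, keep bits 5..31

So for these immediate shifts the shift spellings exist purely for the humans; the machine only has the rotate-and-mask form.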


I've written part of a compiler for Power and never used a shift machine instruction; maybe it's just the setup I was using, but I use rlwinm[.] all the time.


It's only barely more one-to-one than C to assembly is without any optimization passes. When you see a C program, you have a very good idea of what the assembly will look like.


transliteration vs translation


Not really... Assembly is a one-to-one mapping with machine code: two (fast) passes, and you are done. Compiling entails far more complex operations. While I understand that you are saying that the act of assembling is just trivial compiling, I think that compiling implies, and perhaps even means, certain operations and steps that just aren't present in an assembler.


I had a very similar experience. I was filling the screen with white pixels with PSET in BASIC, just a tight loop setting each pixel. It took a handful of seconds.

When I rewrote it in assembly and the screen was instantly filled with white pixels, I couldn't believe it was true. You just can't set pixels that fast! I went back to "debug" my assembly code by setting the pixels to different colors to convince myself that the assembler didn't somehow "cheat" by just setting the background color to white. And sure enough, assembly just turned out to be orders of magnitude faster.
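
For the curious, the whole trick fits in a handful of instructions on a DOS-era VGA in mode 13h. This is a rough sketch of the idea as a DOS .COM program in NASM syntax (not the original code; the file names in the comment are just examples):

  org 100h            ; assemble with: nasm -f bin fill.asm -o fill.com
  mov ax, 0013h       ; set 320x200 256-color VGA mode via the BIOS
  int 10h
  mov ax, 0A000h      ; segment of the VGA framebuffer
  mov es, ax
  xor di, di          ; start at offset 0
  mov al, 15          ; color 15 = white in the default palette
  mov cx, 64000       ; 320 * 200 pixels, one byte each
  rep stosb           ; store AL at ES:DI, CX times: the whole screen in one go
  ret                 ; back to DOS

The BASIC version pays interpreter overhead on every single PSET; here the fill is one string instruction running at memory speed.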


Exact same experience here. QBASIC let you poke the hardware directly, in line with your BASIC, like so.

  y=199                          ' bottom row of 320x200 mode 13h
  x=319                          ' rightmost column
  c=15                           ' color 15 = white in the default palette
  def seg = &ha000 + (&h14 * y)  ' point at VGA memory; &h14 (20) paragraphs = 320 bytes = one row
  poke x, c                      ' write the pixel straight into video memory

Of course, functions like CIRCLE were already much faster, since the entire function was already in ASM instead of having to go in and out of the interpreter for each pixel. You'd wrap the above code in a SUB routine and replace all your PSETs with it. I even coded my own sprite maker that used the mouse in a similar manner, and made a pretty impressive-looking Mario clone with the appropriate functions I learned in math class. The Pythagorean theorem was very useful when working with grids. I didn't understand why everyone wasn't doing the same thing and why everyone hated math.

Imagine the teacher's surprise when, on the first day, everyone is learning FOR loops and I'm making an RND bump map with directional lighting. This was 1997, so we were still using Yahoo to look up the ASM offset codes, which worked just fine; you just had to be good at crafting your queries or you'd have to dig through a couple of pages of results.


To this day, I find myself deleting comments or whitespace under some misguided Pavlovian notion that my code will run faster.

Ouch!

BTW, I think "fill the screen with @'s" must be the "Hello, world!" of Commodore machine-language programming. (Back when you could call it "ML" and not get it confused with a functional programming language, too.)


That's funny; I always get confused and disappointed when I hear or read people talking about "ML", only to realize they mean "machine learning" and not Milner's ML... I hadn't thought there might be yet another overloaded meaning.


Actually, it shouldn't matter even for BASIC.

More 'advanced' versions transform BASIC into bytecode that's interpreted, stripping all the comments.

But yeah, for the early systems...


This is very interesting.

I started with the MSX. Alas, the literature was scarce, but I could play some tricks.

I couldn't go too far with assembly, though. 1 - see the point above. 2 - There were several ways of doing a "system call", but I think one way only worked in "raw programming" (that is, running from BASIC) and another only if you had DOS.

Unfortunately, I couldn't make that work.

Another thing is that the VDP was not memory-mapped (maybe on the MSX2 it was); you could do 'text mapping', sure.

One nice thing I remember was reading the bitmaps for the font from memory and then printing them as a sequence of (8x8) characters.


Exactly what it was like for me, except Apple ][ (6502 FTW!).

I think part of the excitement was also the limited access: there weren't that many computers, and you had to wait for the next issue of Byte (or c't, in my case). I spent many hours at the magazine racks reading computer magazines I didn't have the money to buy.

Having said that, things are great now too. The low-level stuff you can do on a $20 hardware platform is fantastic.


Okay, I swear I've read this exact story of filling up the screen with @ signs before, but it wasn't on Ron Gilbert's site.


I was thinking the same thing; my initial thoughts were Jeff Minter or James Hague, but Google is not showing anything relevant.

It might simply be that a common first-time benchmark exercise for programmers moving from BASIC to assembler is to fill the screen with characters or pixels. I have a sneaking suspicion I did the same too.


The story I remember took place in the UK.


I am so done with websites with monospace fonts for body text. Please put your natural language content in a format that is pleasing to read, or I'm not going to bother.


According to the CSS it's '"Lucida Grande", "Lucida Sans Unicode", helvetica,clean,sans-serif;', so this might be a browser/setting kerfuffle. It works fine for me (FF/Chrome/IE on Win7).


If you are "so done", you can choose not to read it. Or you could change the fonts by hand. Or you could use some add-on (like Readability or Evernote Clearly) to make it better. There are many options available to let you enjoy the content, which is really what matters. Aren't you a hacker?


I feel the same way about HTML5 with dynamic everything and pointless whitespace and extra borders. Completely unnecessary. To each their own.



