
The Athlon's official announcement was June 23, 1999; the official shipping date was August 17, 1999. A week after the announcement, reservations started at Akihabara https://akiba-pc.watch.impress.co.jp/hotline/990703/p_cpu.ht... https://akiba-pc.watch.impress.co.jp/hotline/990703/price.ht...

"AMD Athlon 500-600MHz (bulk) price display. The product is scheduled to arrive in mid-July, and reservations are being accepted. However, there is no specific arrival schedule for compatible motherboards yet."

"the K7 revised "Athlon" has been given a price and reservations have also started. The estimated price is 44,800 yen for 500MHz, 69,800 yen for 550MHz, and 89,800 yen for 600MHz."

Those were Pentium 3 450-550MHz prices.

A week before AMD's official shipping date, retail Athlons arrived in Japan https://akiba-pc.watch.impress.co.jp/hotline/990813.html

"AMD's latest CPU "Athlon" will be sold in Akihabara without waiting for the official release date on the 17th is started. All products on the market are imported products, and 3 models of 500MHz/550MHz/600MHz are on sale. The sale of compatible motherboards has also started, and it is possible to obtain it alone, including Athlon"

https://akiba-pc.watch.impress.co.jp/hotline/990813/p_cpu.ht...

~$380-800 depending on speed.

https://akiba-pc.watch.impress.co.jp/hotline/990813/newitem....

Picture of one of the Akihabara stalls full of CPUs being sold retail before the official AMD launch date :) https://akiba-pc.watch.impress.co.jp/hotline/990813/image/at...

For reference, in the US four days later, on August 17, Alienware was merely teasing pictures of an Athlon system: https://www.shacknews.com/article/1019/wheres-my-athlon According to Anand, "OEMs will start advertising Athlon based systems starting August 16, 1999": https://www.anandtech.com/show/355/24


Yep, I no longer make regular Klein bottles. I'm a so-so glassblower. (Indeed, most glass workers would consider me a good physicist. Physicists would say that I'm a good computer jock. Computer people think that I know a lot about math. Mathematicians feel that I'm a good glassblower.)

Keep 'em all guessing.


> given choice between complexity or one on one against t-rex, grug take t-rex: at least grug see t-rex

I think about this line at least once a week


Doug Hoyte looks at anaphoric macros in his book Let Over Lambda.

Hylang also has this built in. IIRC it was the anaphoric threading macro "a->".

Apparently this language construct is called an anaphora.


I have a story like that! Back in my lab rat days, I dropped a scope probe and its grounded sleeve skittered down the back side of a powered-up, prototype memory board (with 240 individual RAMs on it), leaving a trail of sparks. After that it was dead.

Uh oh. We needed that board. What to do? Well, it can't hurt to try. We had "freeze spray" for debugging purposes, so we got a bottle of white-out handy (what's that?), frosted up the board really well on the component side, powered it up, and quickly marked the devices that defrosted noticeably quicker than the rest.

Got the solder station lady to replace all those parts and it worked again.

Old days...


Sounds like the real web scale was all of the AWS bills we paid along the way


> Arc was implemented on top of Racket

Originally on MzScheme, then later PLT Scheme. It was ported to Racket by the great kogir, IIRC.


I'm going to write a sci-fi story. The plot?

The Lisp path won: Lispus instead of Linux, and we had AGI in 1997 due to code elegance.


> the official Judiciary of Germany [2] (???)

That's the e-ID function of our personal ID cards (notably, NOT the passports). The user flow is:

1. a client (e.g. the Deutsche Rentenversicherung, Deutschland-ID, Bayern-ID, municipal authorities and a few private sector services as well) wishes to get cryptographically authenticated data about a person (name and address).

2. the web service redirects to Keycloak or another IDP solution

3. the IDP solution calls the localhost port with some details on what exactly is requested, what public key of the service is used, and a matching certificate signed by the Ministry of Interior.

4. The locally installed application ("AusweisApp") now opens and displays these details to the user. When the user wishes to proceed, they click a "proceed" button and are then prompted to either insert the ID card into an NFC reader attached to the computer, or to hold it against a smartphone that is on the same network as the computer and also has the AusweisApp installed.

5. The ID card's chip verifies the certificate as well and asks for a PIN from the user

6. the user enters the PIN

7. the ID card chip now returns the data stored on it

8. the AusweisApp submits an encrypted payload back to the calling IDP

9. the IDP decrypts this data using its private key and redirects back to the actual application.

There is additional cryptography layered into the process to establish a secure tunnel, but it's too complex to explain here.
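
To make step 3 a bit more concrete, here's a minimal sketch of how the IDP side hands control to the locally running client. It assumes the TR-03124-style localhost activation interface (port 24727, path /eID-Client, a tcTokenURL parameter); treat those details, and the example URLs, as assumptions for illustration rather than a spec reference:

    # Minimal sketch of step 3: the IDP redirects the browser to a localhost
    # URL that wakes up the AusweisApp and tells it where to fetch the
    # "TC Token" (which describes the requested data and the authorizing
    # certificate). Port, path and parameter name are assumptions here.
    from urllib.parse import urlencode

    def eid_activation_url(tc_token_url: str) -> str:
        query = urlencode({"tcTokenURL": tc_token_url})
        return f"http://127.0.0.1:24727/eID-Client?{query}"

    # Hypothetical IDP endpoint that serves the TC Token for this session:
    print(eid_activation_url("https://idp.example.org/eid/tc-token?session=abc123"))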

In the end, it's a highly secure solution that makes sure the ID card only responds with sensitive information when the right configuration and conditions are met - unlike, say, the Croatian ID card, which will go as far as delivering the picture on the card in digital form to anyone tapping your ID card with their phone. And that's also why it's impossible to implement in any other way - maaaaybe WebUSB, but you'd need to ship an entire PC/SC stack, and I'm not sure WebUSB allows claiming a USB device that already has a driver attached.

In addition, the ID card and the passport also contain an ICAO-compliant method of obtaining the data in the MRZ, but I haven't read through the specs enough to actually implement it.


I'm open to doing that. It would just be a nontrivial amount of work and there are many nontrivials on the list.

Yes, there's now an Arc-to-JS called Lilt, and an Arc-to-Common Lisp called Clarc. In order to make those easier to develop, we reworked the lower depths of the existing Arc implementation to build Arc up in stages. The bottom one is called arc0, then arc1 is written in arc0, and arc2 in arc1. The one at the top (arc2, I think) is full Arc. This isn't novel, but it makes reimplementation easier, since you pack as much as possible into the later stages, and only arc0 needs to be written in the underlying system (Racket, JS, or CL).

It also shrank the total Arc implementation by 500 lines or so, which in pg and rtm-land is a couple of encyclopedias' worth. That was satisfying, and an indication of how high-level a language Arc is.


There’s more detail about the language proposals and their merits here: (very large PDF) https://apps.dtic.mil/sti/trecms/pdf/ADB950587.pdf

Five megabytes for the acorn64 rotating box, because it’s a GIF. And a bad GIF that can’t play at its intended speed for most of its rotation, and so has speed jitters (without delving: I presume it’s due to format limitations, as it looks to be using more than 256 colours; see also https://www.biphelps.com/blog/The-Fastest-GIF-Does-Not-Exist). Ugh. `ffmpeg -i acorn64.gif acorn64.mp4` shrinks it to under 350kB, looking about the same except that it now plays smoothly. And will use a lot less power.

(I noticed this because the page was loading unreasonably slowly for unclear reasons. In cases like this, a GIF <img> has a worse failure mode than <video>.)


As a deeper issue on "justification", here is something I wrote related to this in 2001 on the risks of non-profits engaging in self-dealing when they create artificial scarcity to enrich themselves:

https://pdfernhout.net/on-funding-digital-public-works.html#...

"Consider this way of looking at the situation. A 501(c)3 non-profit creates a digital work which is potentially of great value to the public and of great value to others who would build on that product. They could put it on the internet at basically zero cost and let everyone have it effectively for free. Or instead, they could restrict access to that work to create an artificial scarcity by requiring people to pay for licenses before accessing the content or making derived works. If they do the latter and require money for access, the non-profit can perhaps create revenue to pay the employees of the non-profit. But since the staff probably participate in the decision making about such licensing (granted, under a board who may be all volunteer), isn't that latter choice still in a way really a form of "self-dealing" -- taking public property (the content) and using it for private gain? From that point of view, perhaps restricting access is not even legal?"

"Self-dealing might be clearer if the non-profit just got a grant, made the product, and then directly sold the work for a million dollars to Microsoft and put the money directly in the staff's pockets (who are also sometimes board members). Certainly if it was a piece of land being sold such a transaction might put people in jail. But because the content or software sales are small and generally to their mission's audience they are somehow deemed OK. The trademark-infringing non-profit-sheltered project I mention above is as I see it in large part just a way to convert some government supported PhD thesis work and ongoing R&D grants into ready cash for the developers. Such "spin-offs" are actually encouraged by most funders. And frankly if that group eventually sells their software to a movie company, say, for a million dollars, who will really bat an eyebrow or complain? (They already probably get most of their revenue from similar sales anyway -- but just one copy at a time.) But how is this really different from the self-dealing of just selling charitably-funded software directly to Microsoft and distributing a lump sum? Just because "art" is somehow involved, does this make everything all right? To be clear, I am not concerned that the developers get paid well for their work and based on technical accomplishments they probably deserve that (even if we do compete for funds in a way). What I am concerned about is the way that the proprietary process happens such that the public (including me) never gets full access to the results of the publicly-funded work (other than a few publications without substantial source)."

That said, charging to provide a service that costs money to supply (e.g. GPU compute) is not necessarily self-dealing. It is restricting the source code or using patents to create artificial scarcity around those services that could be seen that way.




It's worth noting that the moral background (at least in terms of political philosophy in the US) was always rooted in practicalities. The Constitution even includes the qualifier "promote the Progress of Science and useful Arts." The moment a protection works against those goals, it's on shaky ground. And that ground is always in flux; there's a reason Thomas Jefferson noted regarding patents that "other nations have thought that these monopolies produce more embarrasment than advantage to society."

This is why copyright is shot through with exceptions (for example, we give broad leeway to infringement for educational purposes, for what benefit does society gain if protection of the intellectual property of this generation stunts the growth of the creative faculties of the next?). And that's usually fine, until, say, a broadly-exceptioned process of gathering and cataloging art and expression available online worldwide, one that was fed into neural net training in academic settings for decades, becomes something of a different moral quality when the only thing that's changed is that instead of a grey-bearded professor overseeing the machine, it's a grey-templed billionaire financier.

(I submit to the Grand Council of People Reading This Thread the possibility that one resolution to this apparent paradox is to consider that the actual moral stance is "It's not fair that someone might starve after working hard on a product of the mind while others benefit from their hard work," and that perhaps copyright is simply not the best tool to address that moral concern).


For the few people interested, I took the time to make a final version in Markdown with lots of links and some improvements/proofreading: https://gist.github.com/q3cpma/a0ceb6a7c6a7317c49704630d69b7...

A theming engine went in something like 15 years ago now; the default theme looks rather dated, but there are plenty of others. See https://wiki.tcl-lang.org/page/List+of+ttk+Themes (though the screenshots of core themes are from 8.5/8.6 - default in particular has changed a bit in Tk 9).

The "catch" is that the theming engine has its own new widgets, and so to be themed an application has to use the new API. Code from 1995 (or 2005) still produces GUIs from 1995.


There are some nice compilations of those, like

https://gist.github.com/willurd/5720255


WebRTC is lowkey one of the most underrated technologies out there, ESPECIALLY WebRTC data channels.

WebRTC data channels are basically “UDP on the web” but they have lots of controls to change how reliable they are, so they can be used as TCP style connections as well.
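
To make those reliability controls concrete, here's a minimal sketch using the aiortc Python library (just one option; libdatachannel and pion, mentioned below, expose equivalent options in C++ and Go). The channel names are made up for illustration:

    # Reliability knobs on WebRTC data channels, sketched with aiortc.
    from aiortc import RTCPeerConnection

    pc = RTCPeerConnection()

    # Defaults (ordered, fully retransmitted): behaves like a TCP stream.
    control = pc.createDataChannel("control")

    # Unordered, zero retransmits: UDP-like, good for per-frame game state
    # where a late packet is worthless anyway.
    state = pc.createDataChannel("game-state", ordered=False, maxRetransmits=0)

    # Or bound by time instead of retransmit count:
    # voice = pc.createDataChannel("voice", ordered=False, maxPacketLifeTime=100)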

I still don’t fully understand why more people don’t use them over something like QUIC. (I think I’ve asked this question before here, but I wasn’t really satisfied with the answers)

I sadly switched off of using them, but mostly because the ecosystem around them is super underdeveloped compared to the ecosystem around QUIC/quinn. There is a LOT of boilerplate involved that feels unnecessary.

But, if you’re making a multiplayer game in the web, it’s basically the best technology to use cuz it already works. And if you use a library like libdatachannel or pion, you could make the game in Go or C++ and compile it for both Steam and the web!

Here’s a project I did with them that shows off compiled for both web and desktop: https://github.com/ValorZard/gopher-combat


Quick tip in case you don't have a good sense of what an ounce feels like: It's exactly 5 quarters (the coin).

So the difference between Babe's bat and today's is about the weight of 55 quarters (a roll and a half). Years of doing laundry at laundromats have given me a keen sense of how much handfuls of quarters weigh, so I find this actually pretty handy.

Just in case:

* A nickel is exactly 5 grams.

* A penny (1982+) is 2.5 grams.

* A dime is 0.08 ounces.

* A quarter is 0.2 ounces.

* 5 rolls of nickels = 1 kilogram

* 2 rolls of quarters = 1 pound.

Why did the U.S. Mint switch between even metric and even imperial units? Probably has to do with the changing metals in those coins. That said, the new small dollar coin is 8.1g / 0.286oz which makes no sense at all. It is, however, exactly 2mm thick.
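
If you want to sanity-check those equivalences, here's a quick sketch; the coin weights and 40-coin roll sizes are my assumptions about the current US Mint specs rather than anything official:

    # Sanity-checking the coin equivalences above.
    GRAMS_PER_OUNCE = 28.3495
    GRAMS_PER_POUND = 453.592

    quarter, nickel, dime = 5.670, 5.000, 2.268   # grams (assumed Mint specs)

    print(5 * quarter / GRAMS_PER_OUNCE)       # ~1.00  -> 5 quarters ~ 1 oz
    print(55 * quarter / GRAMS_PER_OUNCE)      # ~11.0  -> the 55-quarter bat gap, in oz
    print(dime / GRAMS_PER_OUNCE)              # ~0.080 oz per dime
    print(quarter / GRAMS_PER_OUNCE)           # ~0.200 oz per quarter
    print(5 * 40 * nickel)                     # 1000 g -> 5 rolls of nickels = 1 kg
    print(2 * 40 * quarter / GRAMS_PER_POUND)  # ~1.00  -> 2 rolls of quarters = 1 lb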


As @aidos and @abraham_lincoln pointed out, a "Hello World" example is simply a matter of putting that string in a file named hello.cfm in the root of the web application and calling it with your browser, e.g. http://localhost:8080/hello.cfm

Before you can do that though, you need to install the CFML engine. I recently published a tutorial that shows how to install Lucee in Tomcat very easily:

https://www.youtube.com/watch?v=nuugoG5c-7M

http://blog.rasia.io/blog/how-to-easily-setup-lucee-in-tomca...

To make your Hello World example a bit more interesting, you can add the tag `<cfoutput>` with some dynamic content, e.g.

    <cfoutput>
        Hello World on #dateTimeFormat(now(), "mm/dd/yyyy 'at' HH:nn:ss")#
    </cfoutput>
See example at https://trycf.com/gist/226c10cbe74d4083743a617b25398224/luce...

What a wonderful article! Thanks for sharing!

I’ve been thinking a lot about how research — as both leisure and serious inquiry — is making a comeback in unexpected ways. One sign of that resurgence is the huge interest in so-called “personal knowledge management” (PKM) tools like Obsidian, Roam Research, or Notion. People love referencing Luhmann’s Zettelkasten method, because it promises to structure and connect scattered notes into a web of insights.

But here’s the interesting part: the tools themselves are often treated as ends in their own right, rather than as vehicles for truly deep research or the creation of original knowledge. We end up collecting articles, or carefully formatting our digital note-cards, without necessarily moving to the next step — true synthesis and exploration. In that sense, we risk becoming (to borrow from the article) “collectors rather than readers.”

I read Mortimer Adler's "How to Read a Book" 30 years ago. It mentions several levels of reading ability. The highest, he says, is “syntopical reading”, which is a powerful antidote to the trend mentioned above. Instead of reading one source in isolation or simply chasing random tangents, syntopical reading demands that we gather multiple perspectives on a single topic. We compare their arguments and frameworks, actively looking for deeper patterns or contradictions. This process leads to truly “connecting the dots” and arriving at new insights — which, in my view, is one of the real goals of research.

So doing research well requires:

1. *A sense of wonder*: the initial spark that keeps us motivated.

2. *A well-formed question*: something that orients our curiosity but leaves room for discovery.

3. *Evidence gathering from diverse sources*: that’s where syntopical reading really shines.

4. *A culminating answer* (even if it simply leads to more questions).

5. *Community*: sharing and testing ideas with others.

What stands out is that none of this necessarily requires an academic institution or official credentials. In fact, it might be even better done outside formal structures, where curiosity can roam freely without departmental silos. In other words, anyone can be an amateur researcher, provided they move beyond the mere collection of ideas toward genuine synthesis and thoughtful communication.


When things like this pop up, I always think back to Joel Spolsky's review of the Nokia E71 and how he compared it to the iPhone 3G: https://www.joelonsoftware.com/2008/08/22/a-review-of-the-no...

The E71 was arguably Nokia's best phone ever; and it was indeed better than the iPhone 3G. But Nokia just couldn't keep up the momentum.


If you're looking for 'cool itineraries that end in Guam' there's nothing more interesting IMO than the United Island Hopper, a single flight number with 5 stops operated by a 737. Service is between Honolulu and Guam, taking around 14-16 hours, with stops in:

- Majuro in the Marshall Islands (MAJ)

- Kwajalein Atoll in the Marshall Islands (KWA)

- Kosrae in the Federated States of Micronesia (KSA)

- Pohnpei in the Federated States of Micronesia (PNI)

- Chuuk in the Federated States of Micronesia (TKK)

I believe there's at least one stop, if not two, where passengers aren't allowed off the plane because it's basically just a US military base on a rock. [1, 2]

> The limitation is purely legal. The airline treaties distort pricing and competition, otherwise it should be possible to do a 24/48h stopover in Guam, Taipei etc., and even lower your overall ticket price if you're flexible about dates.

Note that you can do this, it just has to be on separate tickets if connections are under 24h. For a connection over 24h, the world is your oyster, so to speak, from a ticketing perspective. Unless you plan to stay at the airport (which is only an option in some places with sterile transit), you may need a visa for the intermediate point.

[1] https://en.wikipedia.org/wiki/Island_Hopper

[2] http://www.gcmap.com/mapui?P=HNL-MAJ-KWA-KSA-PNI-TKK-GUM


Denthor, author of a famous series of democoding tutorials from the late nineties [0], was from South Africa.

[0] http://www.textfiles.com/programming/astrainer.txt

(key quote:)

    [  There they sit, the preschooler class encircling their
      mentor, the substitute teacher.
   "Now class, today we will talk about what you want to be
      when you grow up. Isn't that fun?" The teacher looks
      around and spots the child, silent, apart from the others
      and deep in thought. "Jonny, why don't you start?" she
      encourages him.
   Jonny looks around, confused, his train of thought
      disrupted. He collects himself, and stares at the teacher
      with a steady eye. "I want to code demos," he says,
      his words becoming stronger and more confidant as he
      speaks. "I want to write something that will change
      peoples perception of reality. I want them to walk
      away from the computer dazed, unsure of their footing
      and eyesight. I want to write something that will
      reach out of the screen and grab them, making
      heartbeats and breathing slow to almost a halt. I want
      to write something that, when it is finished, they
      are reluctant to leave, knowing that nothing they
      experience that day will be quite as real, as
      insightful, as good. I want to write demos."
   Silence. The class and the teacher stare at Jonny, stunned. It
      is the teachers turn to be confused. Jonny blushes,
      feeling that something more is required.  "Either that
      or I want to be a fireman."
                                                         ]

I don't trust Time Machine any more. Years ago I wrote some shell scripts that help me mostly automate my complete system setup (with brew and friends). From time to time, I wipe my entire system and restore it with these scripts. Sometimes I have to adjust them, but mostly, they work without changes.

For my data backups, I use restic. The big advantage is that I can read my backups even when I don't have a macOS system present (e.g. my only macOS system once had a hardware issue, and my Time Machine backup was pretty much useless until I got a new one).

I know this solution is not for everybody, but Time Machine has corrupted my backups more than 5 times now, and it feels so slow compared to restic that I don't even think about retrying it after a new macOS release any more - even if my solution is a bit more work.


My advice to retain your sanity: stop using Time Machine and use Carbon Copy Cloner [0] instead. It works. It keeps working. It has excellent documentation for any possible backup and restore cases. It is transparent about what it is doing.

Time Machine works fine until it doesn't. And it won't tell you that a backup is broken until you try to restore from it. The errors are going to be cryptic. There is going to be no support and the forums are not going to help. The broken backup is not going to be able to be repaired. Time Machine uses the "fuck you user" approach of not providing any information about what it does, or doesn't, or intends to do or whatever.

If your data is worth backing up, don't use Time Machine.

[0] https://bombich.com


Only a few of the barriers presented by HTTPS:

Clock sync would be a requirement for access.

A recent device would be a requirement for access (not everyone can afford a new one).

Site admin keeping up with certificate registration would be a requirement.

Approval from the centralized certificate authority would be a requirement.

The server's own domain name matching the accessed domain name would be a requirement.

These are all real scenarios where real people can be denied access to information that is crucial to them, up to the point of survival.

Just a few of the reasons why all my websites still allow HTTP.

HTTP is also way faster on slow connections.

