
We’ve come almost full circle. I remember using this approach back in 2005 or 2006: intercepting clicks and appending a parameter so the server returned rendered HTML partials, which we then swapped into the main contents via .innerHTML.
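
A minimal sketch of that pattern with today's fetch (back then it would have been XHR or an iframe; the ?partial=1 parameter and #main id are made up for illustration):

    // Intercept link clicks, fetch the same URL as an HTML partial,
    // and swap it into the page instead of doing a full navigation.
    document.addEventListener('click', (e) => {
      const link = e.target.closest('a');
      if (!link) return;
      e.preventDefault();
      fetch(link.href + (link.href.includes('?') ? '&' : '?') + 'partial=1')
        .then((res) => res.text())
        .then((html) => {
          document.querySelector('#main').innerHTML = html;
        });
    });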


Sometime in the early 2000s, before AJAX was a thing, Adam Rifkin showed me a demo from his startup KnowNow that achieved the then-mind-blowing feat of a web page updating itself without refreshing. Their trick was to keep the HTTP request open and have the server send occasional updates. More HTML. I don't think they could even manipulate the DOM back then; it may literally have been a page that just never closed the <html> tag.


This is how AJAX in JavaServer Faces used to work nearly 2 decades ago. There were even fancier technologies like https://www.icesoft.org/wiki/display/ICE/Direct-to-DOM+Rende...

I bet ASP had something similar too around the same time


I meant to reply to GP about partial updates rather than parent post about keeping connection open.


I did something similar (same time frame) that definitely manipulated the DOM, but I didn’t keep the HTTP connection open; I used hidden iframes. Clicks got redirected to the iframe, which returned <script> elements that replaced DOM elements in the parent frame via .innerHTML.
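
Roughly like this (the ids and endpoint are invented; the server's response is what does the DOM work):

    <!-- Parent page: a hidden, named iframe acts as the transport. -->
    <div id="main">initial content</div>
    <iframe name="pipe" style="display:none"></iframe>
    <a href="/items?page=2" target="pipe">Next page</a>

    <!-- Server response, loaded into the iframe: a script that
         reaches up into the parent frame and swaps content. -->
    <script>
      parent.document.getElementById('main').innerHTML =
        '<ul><li>item 21</li><li>item 22</li></ul>';
    </script>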

AJAX streamlined the process, but there were a lot of hacky ways to get similar dynamic HTML effects. I like the idea of keeping the HTTP connection open, but that seems quite resource intensive… particularly for the early 2000s.


This technique is called COMET. The very first web chat that I used, way back in 1996, was COMET-based, though the term hadn’t been coined yet. It was the chat room on Sierra Online’s website. The app had two frames, one for the input field and another for the room feed, which was a long-running connection. No JavaScript, just a never-ending HTML page. I suspect the backend was written in Perl, as it was a CGI script.

Basically the server sends a multipart Content-Type header, never sends a Content-Length, and simply doesn’t terminate the connection, so the browser just waits patiently for more parts and renders them as they come. My team experimented with it a bit back in 2000 for doing what would eventually be termed JSONP streaming. It was really cool tech, but we didn’t have a practical application for it.
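
A minimal sketch of that server push in Node (multipart/x-mixed-replace is the classic Netscape-era server-push type; exact headers and boundary handling varied by browser):

    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, {
        // No Content-Length: the browser keeps waiting for more parts.
        'Content-Type': 'multipart/x-mixed-replace; boundary=part'
      });
      let n = 0;
      const timer = setInterval(() => {
        res.write('--part\r\nContent-Type: text/html\r\n\r\n');
        res.write('<p>update ' + (++n) + '</p>\r\n');
      }, 1000);
      // Stop pushing when the client goes away.
      req.on('close', () => clearInterval(timer));
    }).listen(8080);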


Thank you, I'd forgotten that name, Comet. Wikipedia has some more info: https://en.wikipedia.org/wiki/Comet_(programming)


Well, you could manipulate the DOM with that technique by sending <script> tags to do so.


Internet Explorer 5 introduced XMLHttpRequest (as an ActiveXObject) in 1999.


It was fairly easy to communicate between frames with JavaScript, so before this was widely adopted, I remember that the way we used to “fetch” new data was with an iframe that always had a meta refresh tag in it; the server response would include JavaScript that called functions in the main window, passing along any data that was needed.

I might be wrong, but I think even early versions of Gmail did something similar.
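
Something like this sketch (the /poll endpoint, onNewData callback, and ids are made up):

    <!-- Parent page: exposes a callback and hosts the hidden iframe. -->
    <script>
      function onNewData(html) {
        document.getElementById('inbox').innerHTML = html;
      }
    </script>
    <div id="inbox"></div>
    <iframe src="/poll" style="display:none"></iframe>

    <!-- Server response for /poll: re-polls itself every 5 seconds
         and calls up into the main window with fresh data. -->
    <html><head><meta http-equiv="refresh" content="5"></head>
    <body><script>
      parent.onNewData('<li>new message</li>');
    </script></body></html>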


Below is an interesting read about the history and evolution of XMLHttpRequest.

It started in 2000 with Exchange/Outlook Web Access and took a year or two to become prevalent in browsers and elsewhere.

https://medium.com/@mohamedtayee3/make-an-http-request-in-ja...


Yes, but developers weren't widely aware of its power until the launch of Gmail in 2004, and the term "AJAX" wasn't used until 2005.


People were definitely experimenting with it before that. Developers were mostly using it to dynamically load server-rendered snippets of HTML. It took the evolution of SOAP and (especially) JSON/RESTful APIs for it to start seeing mass adoption as the combined technology of "AJAX".

Prototype.js and jQuery had a big hand in the latter.


What made Gmail memorable was cross-browser AJAX. Prior to that, you had things like OWA, but it only worked right in IE.

The big in-between, after OWA and before Gmail, was Oddpost, which was also IE-only and was purchased by Yahoo three months after Gmail's April Fools' announcement to become Yahoo Mail. In a roundabout way, you could argue Gmail was a copy of what eventually became Yahoo Mail.


Rather than come full circle, I think it's more accurate to say that the different web UI paradigms are expanding their scope. Server rendered frameworks are making it easier to do client interactivity, and React is making it easier to do server rendering (via RSCs).

For highly interactive web apps you would still benefit from a SPA since that's what they're designed for.


I feel like every time Hotwire and similar libraries get posted we hear this, but the reality is that simple CRUD with AJAX never went away… Sure, there are loads of SPA projects out there, but that doesn't account for every single website ever.


I'm doing something similar with [1], where pagination (scroll down, click on "Show more") is implemented by requesting the next page as an HTML fragment from the server, loading it into an invisible iframe, and, once it has finished loading, appending it to the current page.

[1] https://animasci.com/
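
A sketch of that flow (the element ids and fragment URL shape are assumptions, not the site's actual code):

    // Load the next page into a hidden iframe, then move its
    // contents into the visible list once it has finished loading.
    function showMore(nextUrl) {
      const frame = document.createElement('iframe');
      frame.style.display = 'none';
      frame.src = nextUrl; // server returns an HTML fragment
      frame.onload = () => {
        const items = [...frame.contentDocument.body.children];
        document.getElementById('list').append(...items); // nodes are adopted
        frame.remove();
      };
      document.body.appendChild(frame);
    }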


We used to let people browse and page through photo galleries by preloading the next columns or rows of photos, then, on click, swapping the image URLs under the static HTML page grid. The order in which you replace the URLs gives a visual paging motion.

And for infinite pages, a long-forgotten technique: for scrolling or continuing content, you could use "multipart" (available since the '90s), effectively streaming the additional page content to the user as you got more bytes to send them.


Yeah, I was really into Wicket in the late 2000s, and while I personally liked it and hated JS, managing UI state and templates on the server was always kind of a footgun for performance and scale in a team environment. It's really hard for me to be excited about this approach again, especially after all the improvements to the web, JS, and TypeScript in the meantime.


I’d also like an invite if anyone still reading has any.


emailed you with one


well received, thanks a lot.


sure thing, enjoy!


> Cleaning it all up has been on my to-do list for the past 5 years

13 years and counting. But I’m sure I’ll eventually clean mine up


Take a look at aifiles: https://github.com/jjuliano/aifiles

Getting Vicuna or Alpaca to drive this could be the best decision for those who want to keep their data to themselves.

Could you imagine the space savings you could achieve with a system that constructs a real normalized DuckDB database, with zstd compression and join tables and all, from your big dump of tar.xml.gz files? Automagically converting all of your media to AV1 and Opus to save space and remove any proprietary codec requirements? Clean collation, with directory choices similar to the Linux filesystem layout?

https://github.com/jjuliano/aifiles seems like one of the best ideas for data organization - just needs some polishing and local-only models


I just have to keep it all until I retire. I’m sure I’ll get around to it then.


Pro tip: I have noticed that most retired people seem to have even less time in their schedule. So I would not count on it :-)


Lol, yeah, my 90-year-old grandmother was also complaining about that. But if it turns out like that, I'll happily take it.


The cube in the center doesn’t exist


Then it was me who was confused, thanks for the correction :)


It exists. (Well, as a matter of the mechanics of the physical object, there is no center cube, but the other parts aren't cubes either.)

But it is not one of the cubes that need to go to their correct locations. The center cube is not moved by any operation on the cube; it is always in its correct location.


The center cube is not moved, but it is rotated. However, since it does not have any colored faces, its rotations are meaningless.


If the cubes have correct locations, none of them need to have colored faces. The location is enough.

(At least, that's true of the 26 outer cubes. You can't get them all into place without simultaneously aligning them correctly. I don't actually know if correct alignment of the center cube is also required, but it'd be my first guess.)


That's provably false; I witnessed it many times when solving the cube myself. Colored faces determine orientation, in addition to location. In the Rubik's method that I know (a simple method for amateurs, not remotely close to professional speedcubing methods) there's actually a late stage where ALL the cubes are in their correct locations, except some of the third-layer corners might have a wrong orientation; there's a dedicated sequence of turns to fix that.


By the same logic, there are 6 more invisible cubes, 1 above each face. Where does it end?


The center cube is obviously part of the mental model of the cube, the Platonic object that the physical object is supposed to represent.

Nobody believes there are invisible cubes outside the Rubik's cube. That's not the same logic; that's you trying and failing to imagine a problem with the idea that cutting a cube into three parts along each of its three axes will generate a hidden central subcube.


The cube in the center would beg to differ.


Holy SQL injection, Batman


I'm not sure where you got that info from, but unless you completely tank everything, there isn't an automatic disqualification. Even if you fail two out of three questions, you'll still get a chance to meet with the screener.


Source: myself.

I did all the tests in half the required time with a 100% success rate, then had a bug during the interview with the screener, the last of the 5 I've done overall. I probably could've fixed it with 2 more minutes, but no, I failed and was asked to practice leetcode puzzles and apply again one month later.

Yeah, I don't think I will.

The thing that bugs me the most is that it was obvious the interviewer wasn't an engineer. They weren't able to tell how close I was to the solution. I dunno, that method might work in some cases, but I've been doing software engineering for 16 years. I guess I'm not good enough.

Also, fuck having to solve puzzles on a timer with someone looking over your shoulder. I have done emergency "servers are on fire" maintenance in the middle of the night for big customers, and it's less stressful than that.


If you’re on a Mac, in Safari you can just right-click on the QR code and set it up in Keychain; then you can auto-fill from Safari, no need for a third-party app.


You might want to keep at least your email decoupled from a particular Hardware+OS+Browser combo.


My anecdote: once I was traveling on the 401 and stopped at an ONroute to grab a coffee. The line was extremely long and not moving at all. I had time to download the app, register, place an order, see it print out at the register, and have someone take it and make my coffee before the line even moved. I just quit the line, moved to the empty section where the mobile orders are, and picked up the coffee as I was deleting the app.


Probably 5 VMs in total, one for each dev, since there are 5 devs.


> 5 small dedicated VMs for each dev

It would make more sense for that to have been 1 per dev, but I still read it as 5 for each dev, with 5 devs total.

In that case I would say definitely dump them, as they are pretty much useless. Then you are down to 2 VMs, and to be honest it doesn't really matter how you host those.


I've been struggling with DNS downtime at Media Temple all day. Is there possibly a more global DNS issue?

