
Pretty sure most applications don't need to support RFC 2177 to still be efficient. Long-polling for most IMAP servers requires one packet every nine minutes or so (to the best of my knowledge), which is practically nothing. It would take months of running constantly to transfer even a megabyte of data. Some rough back-of-the-envelope calculations:

- Call it 10 minutes per poll; with 1,440 minutes in a day, that's 144 packets per day.

- At 50 bytes per packet and 1,000,000 bytes to a megabyte, that's 20,000 packets to transfer a megabyte.

- 20,000 / 144 ≈ 139 days, or roughly four and a half months, to reach a single megabyte.
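
The same arithmetic as a runnable snippet, in case anyone wants to play with the numbers (the interval and packet size are the rough assumptions above, not measured values):

    // Back-of-the-envelope polling overhead (assumed figures, not measurements).
    const minutesPerPoll = 10;   // assumed polling interval
    const bytesPerPacket = 50;   // assumed keep-alive packet size
    const packetsPerDay = (24 * 60) / minutesPerPoll;          // 144
    const packetsPerMegabyte = 1_000_000 / bytesPerPacket;     // 20,000
    const daysToMegabyte = packetsPerMegabyte / packetsPerDay; // ~139
    console.log({ packetsPerDay, packetsPerMegabyte, daysToMegabyte });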

Aren't there some optimisations so minuscule that they aren't worth implementing?


Of course, but it's so satisfying to reduce the overhead further. The struggle to shave half a megabyte off a megabyte of overhead is a great one, while knocking half a megabyte off a gigabyte is often trivial.

I'm not arguing that anyone should actually try shaving that megabyte, of course.


Must admit I'm somewhat jaded with current mail applications and am actually creating my own. Sylpheed is wonderful and the one I currently use; it's fast and it works. My problem is with the GUI: every single email client linked in this thread looks like it has come from 2005 and, unsurprisingly, most of them were created around then.

I'm hoping that by using some more modern libraries, and aiming for a material design-esque client, I might be able to make something easy to use and reliable.


I wrote a mail client; mine is console-based with Lua scripting. When I started, I figured the task was simple: there are MIME libraries and the like out there, so to create a simple mail client I'd only have to write three things:

* A view of all maildirs/folders.

* An index-view to show the contents of a maildir/folder.

* A view of a single message.

How naive I was. Broken email is everywhere. Things like IMAP support, GPG support, and similar took months to get done. That moment when you're checking email in your spam folder and your mail client crashes? That happened far too often.

Writing a mail client isn't hard, but it is fiddly, and you don't realize it until you think you're done (UTF handling, malformed MIME messages, and so on).
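
To give a flavour of the defensive posture malformed mail forces on you, here's a sketch (in TypeScript for brevity; simplified RFC 2047 encoded-word handling only, and a real decoder covers far more cases - the fallback is the important part):

    // Never trust a header to be well-formed; always have a fallback.
    function decodeHeader(raw: string): string {
      try {
        // =?charset?B?base64?= is the common case; anything else falls through.
        return raw.replace(
          /=\?([^?]+)\?B\?([^?]*)\?=/gi,
          (_m: string, charset: string, b64: string) =>
            new TextDecoder(charset).decode(Buffer.from(b64, "base64"))
        );
      } catch {
        // Unknown charset, bad base64, bad anything: show the raw text
        // rather than crash while the user is reading their spam folder.
        return raw;
      }
    }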

In the most sincere way possible: good luck!


I've upvoted you, because I think we need more e-mail clients. Actually, I was thinking about creating one myself as well.


Would you perhaps consider working together instead of on two separate projects? Half the workload for both of us to get the same final product!


I'm currently involved in writing a hex editor for big files (e.g. disk images), because I felt the world needed one as well :). I already have a significant amount of code, so it would be a waste if I switched projects now. Thanks for asking, though. Working solo is hard, both for motivation and for the amount of work that needs to be done. If I didn't have my current project, I would definitely pair up with someone!


Is it open source and are you looking for contributors? If I can help, I'll contribute.


Honoured to have your first post, Benjie. It is open source and made in Electron, but I'm not sure it's up to much at the moment. I've mostly been working on getting IMAP and SMTP set up, as opposed to any graphical work.

If you're interested in helping out, I'd love any help I can get. Email me at the address on my profile page and I can give you some more details. The current design looks as follows:

https://puu.sh/vmtT6/b803a88da5.png


I'd honestly disagree. Start small, don't expect to build something that works for 100k users, and you'll be fine!

Take the bare bones of this project:

- Canvas.

- Websockets.

That's literally it. You'll need to know how to draw on a canvas, and how to send and receive WebSocket messages. You can quite happily keep the current state of the canvas in an in-memory array, perhaps saving it to a file every few minutes in case the server crashes. Then, when that's done, you can swap out your in-memory array for a Redis bitfield and switch the WebSocket messages from JSON to binary. Both should be only a few tens of lines of changes, but after that you'll be able to support tens of thousands of simultaneous users with hundreds, if not thousands, of changes per second.
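
Something like this is all the server needs to be at first (a minimal sketch using Node's 'ws' package; the canvas dimensions and the 5-byte edit format are assumptions for illustration):

    import { WebSocketServer, WebSocket } from "ws";
    import { writeFileSync } from "fs";

    const WIDTH = 1000, HEIGHT = 1000;
    // The whole canvas lives in one in-memory byte array (one byte per pixel).
    const canvas = new Uint8Array(WIDTH * HEIGHT);

    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (ws) => {
      // A new client gets the full current state as one binary frame.
      ws.send(canvas);

      ws.on("message", (data) => {
        // Assumed binary edit format: [x: u16, y: u16, colour: u8].
        const buf = data as Buffer; // assume clients send binary frames
        const x = buf.readUInt16BE(0);
        const y = buf.readUInt16BE(2);
        const colour = buf.readUInt8(4);
        if (x >= WIDTH || y >= HEIGHT) return;
        canvas[y * WIDTH + x] = colour;

        // Broadcast the 5-byte edit to every other connected client.
        for (const client of wss.clients) {
          if (client !== ws && client.readyState === WebSocket.OPEN) {
            client.send(buf);
          }
        }
      });
    });

    // Crude crash protection: snapshot the canvas to disk every five minutes.
    setInterval(() => writeFileSync("canvas.bin", canvas), 5 * 60 * 1000);

Swapping the Uint8Array for a Redis bitfield later doesn't change the shape of any of this, which is why it's such a small diff.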

The complex thing about this project is the number of users required to use it at once; lessen the requirements a little and you'll end up with a simple project.


Excellent work actually completing the challenge set! If you were to do it again, would you still use the same technology? Kafka especially seemed to cause you a few issues.


Kafka wasn't a problem at all. Actually, I was shocked by how well Kafka just worked. It did everything I expected it to do, exactly as advertised in the documentation. Well, except for how annoying it was to get it installed and working on Ubuntu. I will definitely be using Kafka in future projects.

I think all my technology choices worked out fine. I dumped server-sent events halfway through in favour of WebSockets, because WebSockets support binary packets. But that was a pretty easy change, affecting at most 50 lines of code.

I still wish we had an efficient (native) solution for broadcasting an event-sourcing log out to 100k+ browser clients, with catchup ('subscribe from Kafka offset X'). Node.js is handling the load better than I expected it to, but a native-code solution would be screaming fast. It should be relatively simple to implement, too. Unless my google-fu is failing me, I don't think anyone has done it yet.


"an efficient (native) solution for broadcasting an event sourcing log out to 100k+ browser clients, with catchup"

Seems to me like you just described long-polling, which you dismissed in the article as "so 2005".


So, for context: I wrote a websocket-like TCP implementation on top of long polling a few years ago [1], before WebSockets were well supported in browsers. I'm quite aware of what long polling can and cannot do.

Yes, I did dismiss it out of hand in the article. The longer response is this:

In this instance, long polling would require every request to be terminated at the origin server. I need to terminate at the origin server because every connection will start at a different version. The origin server in this case is running JavaScript, and I don't want to send 100k messages from JavaScript every second. Performance is good enough, but barely, and with that many objects floating around the garbage collector starts causing mischief.

The logic for that endpoint is really simple - it just subscribes to a Kafka topic from a client-requested offset and sends all the messages to the client. It would be easy to write in native code, and it would perform great. After the per-client prefix, each message is just broadcast to all clients, so you could probably implement it using some really efficient zero-copy broadcast code.
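
Roughly this shape, as a sketch (a toy in-memory log stands in for the Kafka consumer here, and the offset-in-the-URL scheme is just an assumption for illustration):

    import { WebSocketServer, WebSocket } from "ws";

    // Toy append-only log standing in for the Kafka topic.
    const log: Buffer[] = [];
    const listeners = new Set<(msg: Buffer) => void>();

    function append(msg: Buffer) {
      log.push(msg);
      for (const fn of listeners) fn(msg);
    }

    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (ws, req) => {
      // Client requests catchup from an offset, e.g. /stream?offset=42
      const url = new URL(req.url ?? "/", "http://localhost");
      let offset = Number(url.searchParams.get("offset") ?? 0);

      // Replay the backlog from the requested offset (the per-client part)...
      for (; offset < log.length; offset++) ws.send(log[offset]);

      // ...then forward live messages. From here on every client receives an
      // identical byte stream, which is what a zero-copy native broadcaster
      // could exploit.
      const forward = (msg: Buffer) => {
        if (ws.readyState === WebSocket.OPEN) ws.send(msg);
      };
      listeners.add(forward);
      ws.on("close", () => listeners.delete(forward));
    });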

The other approach is to bunch edits behind constantly-created URLs, and use long-hanging GETs to fetch them. I mentioned that in the blog post, but it's not long-polling - there's no poll. It's just old-school long-hanging GETs. I think that would work, but it requires an HTTP request/response for each client, for each update. A native solution using WebSockets would be better. (From memory, WebSockets have a per-frame overhead of only about 4 bytes.)

[1] https://github.com/josephg/node-browserchannel


Btw, there's a Docker-based Kafka install that's great for spinning up Kafka and testing quickly.

Nice work on all this!


I always had crazy problems setting up Kafka until I discovered https://github.com/Landoop/fast-data-dev


Haven't there been several of these posted to Hacker News already, all of which look almost identical?

EDIT: Ah, apparently they're all links to this page.


There's also no way to uninstall Edge; the current approach appears to be to delete all of its binaries in a specific folder.


That's a fairly normal thing, though; generally every operating system ensures there is a browser that can always be used for internal functions. Android has Chrome (previously WebView, I think).


My moderating day generally goes as follows:

- Look at the mod queue, replying to any mod mail we've received and reviewing any posts that have been automatically removed by our AutoModerator script.

- Have a peek at the new posts; on any one day my subreddits only get a couple of hundred, so a cursory glance takes a couple of minutes.

- List all reported messages and see whether they're acceptable.

Then, because most of the subreddits I'm currently contributing to are about projects I'm working on, I generally answer any questions people have or problems they're facing.

However, this is just me. What work you're required to do really depends on the type of subreddit you're looking to moderate.


Calm down; personal insults aren't necessary. I have to admit I've had the same experience: I don't know my phone number, I don't carry my phone with me the majority of the time, and yet I've needed it twice so far. Once for a Twitter bot and once for a Telegram bot.

No other service has ever asked me for a phone number that wasn't optional (and I use quite a few; LastPass counts about 2,200 of them).

(Of course, sometimes I do still take the optional route; two-factor authentication is exceptionally nice.)


Many services support OTP authentication and don't require a phone number for 2FA anymore.

Facebook, Google, Dropbox, Evernote, GitHub, GitLab, Amazon, ...
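
And app-based OTP is a genuinely tiny amount of code. Here's a sketch of the usual TOTP scheme (RFC 6238) to show there's no phone number anywhere in it; it assumes the shared secret has already been base32-decoded into raw bytes:

    import { createHmac } from "crypto";

    // Minimal TOTP: HMAC-SHA1 over a 30-second counter (RFC 6238).
    function totp(secret: Buffer, now = Date.now()): string {
      const counter = Math.floor(now / 1000 / 30);
      const msg = Buffer.alloc(8);
      msg.writeBigUInt64BE(BigInt(counter));

      const hmac = createHmac("sha1", secret).update(msg).digest();

      // Dynamic truncation (RFC 4226): the low nibble of the last byte picks
      // an offset; take 31 bits from there and reduce to six decimal digits.
      const offset = hmac[hmac.length - 1] & 0x0f;
      const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000;

      return code.toString().padStart(6, "0");
    }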


The article ends by saying "By the theorem, Koomey's law has the potential to be valid for about 125 years". One imagines it could continue long after even that, however, as further ways of computing are discovered.


Reversible computation (which that 125-year number refers to) has to be done very slowly. There might be some applications where that would still be relevant, but it's pretty clear this trend won't last even until the 2048 mark, let alone for another 125 years and beyond.

The Feynman Lectures on Computation has some good sections on reversible computation that I would recommend.


Even if you don't use the language specifically, the ideas it provides on creating queries apply to pretty much every database I've used, with the one exception being Redis.

It also doesn't hurt that it's the most popular database language at the moment and, judging by the number of applications using it, it won't be disappearing anytime soon (especially not in just ten years).

