Perhaps we don't need to transmit energy in the form of electricity but rather as some kind of medium, and sodium metal could be a good choice. We could pump liquid sodium through pipelines or just transport it by ship.
Salt (NaCl) is abundant in oceans. If you react sodium with water in a fuel cell you get NaOH, which you can again electrolytically split into sodium metal, oxygen, and hydrogen. There are ways to extract sodium metal more efficiently than the 130-year-old basic Castner process, and there are better approaches to the fuel cell than what Lockheed originally imagined in the 1970s.
The java.net.Inet4Address and Inet6Address classes could be more lightweight.
For a simple IPv4 address, normally representable in 4 bytes/32 bits, Java uses 56 bytes. The reason is that the Inet4Address object takes 24 B and the InetAddressHolder object takes another 32 B. The InetAddressHolder can contain not only the address but also the address family and the original hostname, if one was resolved to the address.
For an IPv6 address, normally representable in 16 bytes/128 bits, Java uses 120 bytes. An Inet6Address contains the InetAddressHolder inherited from InetAddress and adds an Inet6AddressHolder with additional information, such as the scope of the address, plus a byte array containing the actual address. This is an interesting approach, especially compared to the implementation of UUID, which stores its 128 bits in two longs.
Java's approach causes 15x overhead for IPv4 and 7.5x overhead for IPv6, which seems excessive. What am I missing here? Can or should this be streamlined?
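To make the comparison concrete, a UUID-style packed representation is easy to sketch. CompactInet below is a hypothetical class of my own, not anything in java.net, and it deliberately drops the hostname and address-family bookkeeping that InetAddressHolder carries:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical compact holder: IPv4 lives in the low 32 bits of 'lo',
// IPv6 is split across two longs, the same way UUID stores its 128 bits.
public final class CompactInet {
    private final long hi;      // upper 64 bits of an IPv6 address, 0 for IPv4
    private final long lo;      // lower 64 bits (IPv4 uses only the low 32)
    private final boolean v4;

    private CompactInet(long hi, long lo, boolean v4) {
        this.hi = hi; this.lo = lo; this.v4 = v4;
    }

    public static CompactInet of(InetAddress addr) {
        byte[] b = addr.getAddress();           // 4 or 16 bytes
        long hi = 0, lo = 0;
        if (b.length == 4) {
            for (byte x : b) lo = (lo << 8) | (x & 0xFFL);
            return new CompactInet(0, lo, true);
        }
        for (int i = 0; i < 8;  i++) hi = (hi << 8) | (b[i] & 0xFFL);
        for (int i = 8; i < 16; i++) lo = (lo << 8) | (b[i] & 0xFFL);
        return new CompactInet(hi, lo, false);
    }

    // Convenience for address literals; getByName does no DNS lookup for them.
    public static CompactInet of(String literal) {
        try {
            return of(InetAddress.getByName(literal));
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException(literal, e);
        }
    }

    public long high()      { return hi; }
    public long low()       { return lo; }
    public boolean isIPv4() { return v4; }
}
```

On a 64-bit JVM with compressed oops this should come out to roughly 32 B per object (header plus two longs and a boolean, padded), versus the 56 B and 120 B quoted above, and a value class under Project Valhalla could shrink it further.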
What a wonderfully HN response to a biographical piece on James Gosling.
For my part, most of the Java code I have written that uses IP addresses needs somewhere between 1 and 10 of them, so I'd never notice this overhead. If you want to write, like, a BGP server in Java, I guess you should write your own class for handling IP addresses.
This seems like a case of insufficient leverage and negotiation, which is of course easy to say after the problem has been described. Freelancers and small companies need to be really good at negotiation, or accept a higher risk of losing a lot of money, time, and energy on somewhat unnecessary fuck-ups.
Most of the moving I have personally done was with people someone knew, and it worked out reasonably well. We did most of the packing, and the movers mostly just did the loading/unloading. The only other thing was taking apart and putting back together the bed, table, etc. This worked well enough even between Zurich and Prague and cost slightly over $1,000.
Yes and no. A statically typed language only knows that the data stored in some piece of memory conformed to some shape/interface when it was first stored there. That's why tricks like SIMD Within A Register (SWAR) work at all, e.g. when you need to parse temperatures from string input very fast, as in the 1BRC:
https://questdb.com/blog/billion-row-challenge-step-by-step/
How does your type system help there?
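To illustrate the kind of SWAR trick used in 1BRC-style parsers: treat 8 ASCII bytes as a single long and locate the decimal point without a byte-by-byte loop. This is a minimal sketch of the classic zero-byte-detection idiom; the class and the byte packing are my own illustration, not code from the linked article.

```java
public class SwarDot {
    // '.' (0x2E) broadcast to all 8 byte lanes of a long.
    private static final long DOTS = 0x2E2E2E2E2E2E2E2EL;

    /**
     * Returns the byte index (0-7) of the first '.' among the 8 bytes of w
     * (packed little-endian), or 8 if there is none. XORing with DOTS turns
     * matching lanes into 0x00; the (x - 0x01..) & ~x & 0x80.. formula then
     * sets the high bit of each zero lane, and trailing-zero counting finds
     * the first one.
     */
    static int dotIndex(long w) {
        long x = w ^ DOTS;
        long found = (x - 0x0101010101010101L) & ~x & 0x8080808080808080L;
        return Long.numberOfTrailingZeros(found) >>> 3;
    }

    public static void main(String[] args) {
        // "12.3" packed little-endian: '1'=0x31, '2'=0x32, '.'=0x2E, '3'=0x33
        long w = 0x31L | (0x32L << 8) | (0x2EL << 16) | (0x33L << 24);
        System.out.println(dotIndex(w)); // prints 2
    }
}
```

The point is that the compiler's type for this data is just `long`; the knowledge that those bits are ASCII digits and a dot lives entirely in the programmer's head, not in the type system.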
With static typing, you are doing specification and optimization at the same time. That is maybe necessary because compilers and languages are not sufficiently smart, but the mix also complicates reasoning about correctness and performance. And static typing introduces a whole universe of problems of its own. That's why we have reflection, or memory-inefficient IP address objects in Java:
For a simple IPv4 address, normally representable in 4 bytes/32 bits, Java uses 56 bytes. The reason is that the Inet4Address object takes 24 B and the InetAddressHolder object takes another 32 B. The InetAddressHolder can contain not only the address but also the address family and the original hostname, if one was resolved to the address.
For an IPv6 address, normally representable in 16 bytes/128 bits, Java uses 120 bytes. An Inet6Address contains the InetAddressHolder inherited from InetAddress and adds an Inet6AddressHolder with additional information, such as the scope of the address, plus a byte array containing the actual address. This is an interesting approach, especially compared to the implementation of UUID, which stores its 128 bits in two longs.
Java's approach causes 15x overhead for IPv4 and 7.5x overhead for IPv6, which seems excessive. Is this just bad design, or excessive faith in static typing combined with OOP?
The only two things I have ever enjoyed that count as some kind of programming were building shell pipelines to process data and, for almost 5 years now, writing Clojure and ClojureScript. We are now 4 guys writing Clojure with 30+ years of combined Clojure experience. I participated as a co-founder in a front-end-heavy project in Clojure/ClojureScript, and more than a year ago I started a Clojure-preferring consultancy in Prague, Czechia. For something like an inter-dealer broker trading system it's a no-brainer. For many other things as well. Even for distributed systems/higher-level infrastructure it might be a good choice to get going with, at least.
The ZFS ARC tries to keep a balance between the most frequently and most recently used data. Also, by default, the ZFS ARC only grows to a maximum of about half the available RAM. You can change that at runtime (by writing the size in bytes to
/sys/module/zfs/parameters/zfs_arc_max
or by setting the module option, e.g. in
/etc/modprobe.d/zfs.conf
to something like this
options zfs zfs_arc_max=<size in bytes>
). But be careful, as the ZFS ARC does not play that nice with the OOM killer.
The common code base of upstream OpenZFS is now the FreeBSD/Linux code. Things have changed a lot with OpenZFS 2.0, AFAIK.
Yeah, server vendors are a bit crazy with their small jet-engine fans. A 4U chassis could easily house 80, 92, or even 120 mm fans, which could spin much slower while moving more air. That would of course also be much more efficient.
But the L2ARC only helps read speed. The idea with dm-writecache is to improve write speed.
I started thinking about this when considering using a SAN for the disks, where write speed would be limited by the 10GbE network I had. A local NVMe cache could then absorb write bursts, maintaining performance.
That said, it's not something I'd want to use in production, that's for sure.
There was some work being done on writeback caching for ZFS[1], sadly it seems to have remained closed-source.
That's what the SLOG is for, if the writes are synchronous. If you have many small files or want to optimize metadata speed, look at the special metadata device, which can also store small files up to a configurable size.
ZFS of course has its limits too, but in my experience I feel much more confident (re)configuring it. You can tune real-world performance well enough, especially if you can use some of the advanced ZFS features like snapshots/bookmarks plus zfs send/recv for backups. With LVM/XFS you can certainly hack together something that will work pretty reliably too, but with ZFS it's all integrated and well tested (because it is a common use case).
As I mentioned in my other[1] post, the SLOG isn't really a write-back cache. My workloads are mostly async, so SLOG wouldn't help unless I force sync=always which isn't great either.
I love ZFS overall; it's been rock solid for me in the almost 15 years I've used it. This is just the one area where I feel it could do with some improvement.