Our app's build time ballooned by 2-3X or more when we went to multidex. We've had some luck reducing it, but we'd still like to axe multidex.
I definitely consider a 3X build time increase (not 30 seconds to 90 seconds, but 1 or 2 min to 5-8) to be a significant problem, bordering on nightmare... especially since we just included a few more libraries that put us past the 65k mark.
Me? I work on embedded android installations that have over a thousand years combined of running with 0 software crashes. Once I get to a certain point in developing an app for our installations I start profiling for the slightest memory leaks by watching changes in memory usage profiles for different user flows.
And I can only do that with release builds if I want useful numbers. I've had us split apps up over Multidex build times (for example, most of our AWS-centric code lives in a separate app that exposes specific functions via an AIDL interface, because the SDK was bringing in too many methods).
It sounds like a ridiculous number of methods until you realize that just importing Guava brings in 20k methods, using half the Amazon SDKs would probably bring in another 20k-40k, Apache's main utils bring in about 10-20k, etc.
For one thing, it means you can't pull in Scala. Well, you can, but pull in enough bloated third-party Java libraries and you might hit that limit pretty quickly. Google broke Google Play Services out into a large number of small dependencies once it was past the 15k method mark. Mind you, certain compilation settings will totally strip out unused methods, but it's still annoying.
Java's crippled generics, lack of tuple support, and lack of default parameters encourage interface bloat in utility libraries.
Given how bloody good Java IDEs are (By which I mean to say - how good IntelliJ is), the cognitive cost of this interface bloat is very low... Until you start developing for Android.
On the one hand yes, but on the other hand Java also generates bridge methods when you implement a generic interface. A MyType implementing Comparable&lt;MyType&gt; will contain the compareTo(MyType) method you wrote plus a generated compareTo(Object) bridge method, since the interface's type parameter erases to Object.
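That bridge is easy to see with reflection; a small sketch (class and variable names are made up here):

```java
import java.lang.reflect.Method;

public class BridgeDemo {
    // Implementing a generic interface makes javac emit a synthetic bridge:
    // compareTo(MyType) is what we wrote; compareTo(Object) is generated so
    // that callers going through the erased Comparable interface still work.
    static class MyType implements Comparable<MyType> {
        public int compareTo(MyType other) { return 0; }
    }

    public static void main(String[] args) {
        int written = 0, bridges = 0;
        for (Method m : MyType.class.getDeclaredMethods()) {
            if (m.getName().equals("compareTo")) {
                if (m.isBridge()) bridges++; else written++;
            }
        }
        // Two compareTo methods end up in the class file: one written, one bridged.
        System.out.println("written=" + written + " bridge=" + bridges);
    }
}
```

Both methods count against the dex method limit, even though only one appears in the source.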
It is more likely that generated classes, countless getters and setters, and a tendency toward small methods have a higher impact.
For reference types probably yes, but Java also often needs extra methods or overloads for handling primitive types, which would get unnecessarily boxed when used in generic methods.
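This is why utility libraries often ship a primitive overload next to the generic version; a minimal sketch (the `max` helpers are made up for illustration):

```java
public class OverloadDemo {
    // The generic version works for any Comparable, but primitives must box
    // to Integer, Double, etc. before they can be passed in.
    static <T extends Comparable<T>> T max(T a, T b) {
        return a.compareTo(b) >= 0 ? a : b;
    }

    // Avoiding the boxing costs an extra method per primitive type, which is
    // exactly the kind of overload that inflates a library's method count.
    static int max(int a, int b) {
        return a >= b ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(max(3, 7));       // primitive overload, no boxing
        System.out.println(max("a", "b"));   // generic version
    }
}
```

Multiply that pattern across eight primitive types and a few dozen utilities and the overloads add up fast.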
I don't know, but I would doubt it. Making such a huge product decision based on an engineering limitation (that has a workaround) seems like a poor idea.
Because of "single responsibility principle" it's better if classes and methods do one thing. That usually leads to lots of small classes and small methods.
I find it extremely awkward that we actually use numbers for ports. Port 80 is typically used for HTTP, but there's nothing preventing another application from using port 80. Why not call the port "HTTP" instead? Better yet, why not give it an integer range of ports to go along with the naming, e.g. MyApp[1], MyApp[2].
If you want to hide a port from being probed then give it a GUID as a name. No more port scans.
At the time TCP/UDP were invented, keeping packet size small was very important. A string or GUID as addressing information would have been out of the question.
(If there wasn't a need to distinguish multiple clients on the same machine, we might have only had 255 ports!)
A standard port for HTTP servers is needed, as most HTTP clients don't support DNS SRV records.
That said, in the IPv6 world there's no technical reason you can't just let every service bind a different IP address.
> in the IPv6 world there's no technical reason you can't just let every service bind a different IP address.
I'm not a networking expert, but is it true that you can assign multiple IPs to a single device using IPv6? It sounds like it would end up very messy.
You can do that, provided that the network routes multiple IPs (or a whole prefix) to the device. It tends to be pretty messy keeping track of all the addresses in the OS configuration, but that's an opportunity for OSes to get better.
It's not even convenient. Strings are one of the biggest scourges of any programming language out there. Giant mess in basically every language until Java got it somewhat right, and even then don't forget your equals...
C# (.NET) could have been the forerunner if only they had not gone with UTF-16, but then it's only with hindsight that we can point to UTF-8.
Strings are only a problem because they are being treated as text. In this case, they would simply be treated as binary identifiers, so there is no problem. The raw bytes would do just as well for this kind of thing.
Portmap[1] lets you use names instead of port numbers. A server just grabs an arbitrary port number and then registers itself with the portmap server. When you want to connect from a client, you first ask the portmap server for the port number.
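The lookup model is simple enough to sketch in a few lines; this is a toy in-process stand-in, not the actual portmap wire protocol (names and ports are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class PortmapSketch {
    // Stand-in for the portmap server's name -> port registry.
    static final Map<String, Integer> registry = new HashMap<>();

    // A server binds an arbitrary free port (e.g. by binding port 0 and
    // reading back the assigned port), then registers itself by name.
    static void register(String service, int port) {
        registry.put(service, port);
    }

    // A client asks the portmap server for the port before connecting.
    static int lookup(String service) {
        Integer port = registry.get(service);
        if (port == null) throw new IllegalStateException("not registered: " + service);
        return port;
    }

    public static void main(String[] args) {
        register("my-service", 49523);   // port number is arbitrary
        System.out.println(lookup("my-service"));
    }
}
```

Only the portmap service itself needs a well-known port; everything behind it is addressed by name.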
>For some reason, SRV records are not in widespread use.
No legacy tooling supports it, and support is poor in the products already out there. Huge numbers of firewalls are hard-coded to 80/443, so alternate ports would be blocked.
When you make a poor choice on the internet, it lasts forever.
More practically, it's very nearly become a moot point due to firewalls which block all traffic except to a very narrow set of ports. Opening holes on enterprise firewalls is a gargantuan battle. Which is why we've seen what are effectively new services implemented as port-80 features: Twitter, Reddit, Facebook, Gmail, Dropbox, just off the top of my head (effectively replacing IRC, Usenet, finger, email, and FTP, arguably, in their basic implementations).
Which is why when Apple moved to TCP/IP, multicast DNS (Bonjour, zeroconf) was such a crucial part, because it let them implement a similar model on top of port numbers.
The hard links issue often reared its head on Linux, because a sub-directory has a hard-link to its parent directory called "..". So you could only mkdir 32767 directories in any given directory, which hits you fairly fast if you try to do something like sorting numbered files into "000000/00.txt" etc
After yesterday's posting of the list of incorrect assumptions programmers make about the world, I added it to my daily web routine to visit the page and read one list per day till done.
What makes you say it is "premature"? Isn't it possible that they hit that limit because the project is a monolithic monster? Or because it uses libraries that waste those resources?
I'd argue the opposite. I think you're referring to malloc when you cite premature optimisation, but this seems more an issue of failing to optimise for the future, or to generalise beyond the initial needs of the developer.
You're succumbing to selection bias[1]. You never notice all of the 16-bit counts in all of the software you use that don't overflow. For all we know, there could be a thousand of them for every case where 16 bits is too low.
Assuming you managed to accumulate enough items to fill a 64-bit counter, if you got 1 core of your super-fast PC to increment a 64-bit value for each one, it would take over 100 years to count them all. (Assuming you don't just decide to pop -1 in the counter. Presumably you'd want to make sure you haven't miscounted during the accumulation phase.)
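The arithmetic checks out; assuming one increment per cycle on a single 4 GHz core:

```java
public class CounterYears {
    public static void main(String[] args) {
        double increments = Math.pow(2, 64);      // ~1.8e19 values to count through
        double perSecond = 4e9;                   // 4 GHz core, one increment per cycle
        double seconds = increments / perSecond;  // ~4.6e9 seconds
        double years = seconds / (365.25 * 24 * 3600);
        System.out.printf("~%.0f years%n", years); // roughly 146 years
    }
}
```

So "over 100 years" is if anything an understatement: even at a full increment per cycle, you'd be counting for close to a century and a half.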
Well, for counts 16 bits is enough, unless you count something really small, like every individual byte in something.
And for counts 32 bits should be enough, since it's 4 billion.
Sure -- but it is an exceptional amount of memory to use for tiny objects like those. If you're working with four billion objects at a time, they're probably more substantial than eight bytes.
Or you spent a lot of effort to get them to 8 bytes or even smaller, to fit as many of them as possible in memory. See use-cases like time-series/analytics databases, point-clouds, simulations with many elements...
It is. But when you have one interface, that interface can be given the number 0. So you have a total of 1 interface, even though you've only counted up to 0.
Edit: I misunderstood. I'll leave the comment up. But I originally interpreted the story as meaning that each interface needed to be assigned an unsigned 16-bit id, which allows for a total of 65536. That was just inference on my part though. It literally says that more than 65535 are not allowed.
Quantity as in great amount? Either way, sounds like the worst kind of premature optimization to save one bit's worth to lose the ability to ever have the quantity be zero.
This sounds familiar. Mono worked for me until it didn't. And it would be some corner case like timing and sockets, or HTTP connection issues. By that time we had already bought into it, and so I was running around opening a huge black box trying to figure out the internals.
It's almost as if programming languages should support integers, rather than just machine words. Or, put differently: Fixed-Width "Integers" Usually Harmful.
> A short fact about .NET: if you have an interface IFoo<T>, the runtime generates a separate method table per each IFoo<int>, IFoo<double>, IFoo<bool>, and so on.
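For contrast, Java's erasure means every instantiation of a generic type shares a single runtime class (and a single method table), which is part of why it boxes primitives instead:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        List<String> strs = new ArrayList<>();
        // Unlike .NET's per-instantiation method tables, both of these
        // erase to the same java.util.ArrayList class at runtime.
        System.out.println(ints.getClass() == strs.getClass());  // true
    }
}
```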