Doing GUI calls is a notorious way to slow down apps. I sped up an app (major software from a large corporation) by a factor of 10x a few years back by removing GUI updates. What you have to do is create a thread that refreshes the GUI periodically and never let your main worker thread touch the GUI. This particular app was an Eclipse plugin (a source control app), and the Eclipse SWT call to update the console log window was being made after every file operation (of which there were tens of thousands). All I did was move the GUI refreshing into its own thread and call it only every 250 ms or so. I was hailed a hero, and deservedly so, for speeding up our app by 10x, and it was one of the most satisfying accomplishments of my software career.
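For what it's worth, here's a minimal sketch of that pattern in JavaScript (obviously not the original Eclipse/SWT code; the names and the 250 ms interval are just illustrative): the worker only appends to an in-memory buffer, and a timer flushes it to the console periodically instead of after every file operation.

```js
// Sketch only: worker thread/loop never touches the "GUI" (here, the terminal).
const pending = [];

function processFile(file) {
  // ... expensive file operation ...
  pending.push(`processed ${file}`); // cheap: no UI work on the hot path
}

// Periodic refresher: one batched console update every 250 ms.
const flusher = setInterval(() => {
  if (pending.length === 0) return;
  process.stdout.write(pending.join('\n') + '\n');
  pending.length = 0;
}, 250);

// Call clearInterval(flusher) once the worker is done.
```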
This strategy is also encouraged by best iOS development practices (main thread never works on GUI).
Edit: 'main' as in the one doing the work, in the same spirit as the original post. In truth, the real main thread that runs main() is where all the iOS GUI work happens. Apologies for the confusing wording.
This is not only completely wrong, it's dangerously wrong. It is in fact wildly unsafe to touch the GUI from anything except the main thread. It's all the non-UI stuff that you should shove onto another thread.
In other languages / frameworks, "main thread" may have flexible semantics, but in the land of iOS, UIKit defines the term "main thread" to have one specific meaning.
Nope, the term "main thread" is very explicitly defined by both iOS and OS X. Not only in all the frameworks (e.g. NSThread, NSRunLoop, etc. all explicitly define "main thread"), but even at the system level. OS X and iOS have a pthread function `int pthread_main_np(void);` that returns non-zero if the current thread is the main thread (meaning the initial thread that the application was launched with), and that is the thread that all UIKit (iOS UI) and AppKit (OS X UI) code must be running on.
This is backward. Since UIKit is single threaded, the main thread must work on GUI: the best practice is that nothing except GUI work should happen there.
You're not being charitable enough to the comment... the two of you are in complete agreement if you just read the word "main" differently. It's clear the grandparent meant "the thread that's doing the work."
It's really difficult to get to that meaning, because "main thread" is what you call the thread where main() is invoked and which is where all the GUI calls must happen. You can't use a common term in context to mean something different from its common meaning and expect people to follow.
And yet it was perfectly obvious what was meant. If you just assume the person who made the comment is not stupid/rambling (difficult concept, I know...) it is not difficult to get the meaning at all.
You say it was perfectly obvious, I say it's almost impossible for someone steeped in iOS development to understand any way other than the way I said. Even given a huge amount of charity to the poster, the best I can do is "I have no idea what they mean by this."
How come we have to give this person the benefit of the doubt to the point of understanding a term to mean its exact opposite, but you don't have to give us any of that?
I think it's just a "my environment" bias thing going on. I don't work on iOS and I immediately understood what was meant by the comment. Of course, the "my environment" bias may also include a degree of "the other is not as bright as me."
Changing a fundamentally well-defined term (main thread, i.e. the one on which main() is called which is running the core event loop for the application process) to be an incorrect opposite is rather fundamental when one is explaining an iOS best practice.
Same was true of Java. Sun called it the "AWT event dispatch thread." Any process that does computation or blocks on IO should run in a separate thread and periodically post updates to the UI via an event queue.
When I gave the Nexus Mod Manager a try a little while ago, I wondered how it managed to take 18 hours to copy a few hundred files from one directory to another - on a SSD. Wouldn't be surprised if this was at least part of it.
Nexus Mod Manager is impressively bad. The interface is insanely laggy; select a mod, wait a second, list starts to refresh, wait a second, list finishes refreshing, wait a second, the mod info shows up. It's really easy to do things by mistake because of the lagginess (and probably just generally poor quality of programming). It has multithreaded downloads, but even these are messed up. Every time I tried them about 50% of the downloads got corrupted.
Doing GUI calls is a notorious way to slow down apps.
Seriously. One notable Smalltalker sped up the compiler in VisualWorks this way. Yes, there is a compiler in Smalltalk, it just usually only compiles one method at a time, which had a lot to do with why no one noticed that printing notifications to the transcript took up most of the time. So for a bulk code load, turning off the notification sped up "compile time" a whole lot. In fact, the JIT-ed Smalltalk in VisualWorks actually outperformed the C/YACC implemented compiler in a different Smalltalk.
Someone needs to keep track of this stuff. Why do we keep making the same mistakes over the decades? (Insert famous Alan Kay quote here.)
Years ago I had a similar, albeit smaller, discovery. Loading a Flash application with a progress bar and setting the quality option to the lowest setting made the bar go faster.
I had a few tricks like that too – I vaguely remember moving bits of the GUI for some apps off screen so they wouldn't get drawn and the apps would thus run faster.
Even more important, npm doesn't reliably reproduce builds. Different runs of the same build can either fail or succeed -- which is odd, given that builds should be deterministic. Shrinkwrapped builds are broken in the same way.
This is the first time I've heard of ied, a little competition could go a long way!
Could you expand on that? What causes it to be nondeterministic? I haven't had that experience but it's the second time I've read someone reference this behavior
> You can reliably get the same dependency tree by removing your node_modules directory and running npm install whenever you make a change to your package.json.
I have a project that's shrinkwrapped. I check out a new copy of the project and run `npm install`. My co-worker does the exact same thing; his fails, yet mine successfully builds. An install of a shrinkwrapped project should be deterministic, i.e. it should either always succeed or always fail, not both.
From that doc page it sounds like that should be deterministic? Even without shrinkwrapping, a fresh install from package.json with empty node_modules should be deterministic.
> The npm install command, when used exclusively to install packages from a package.json, will always produce the same tree. This is because install order from a package.json is always alphabetical. Same install order means that you will get the same tree.
> You can reliably get the same dependency tree by removing your node_modules directory and running npm install whenever you make a change to your package.json.
Maybe you're experiencing a bug, rather than some ingrained non-determinism in npm?
My understanding is that even when you fix your dependencies to exact versions, your dependencies probably haven't so without shrink-wrap you'll never know _exactly_ what gets installed.
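A concrete way to see why, using npm's own `semver` package (the versions and range here are made up for illustration): even if your top-level dependencies are pinned exactly, a transitive dependency declared with a caret range can legitimately resolve to a different release on every fresh install.

```js
// Illustration only: a caret range matches any newer patch/minor release.
const semver = require('semver');

console.log(semver.satisfies('1.2.3', '^1.2.0')); // true
console.log(semver.satisfies('1.3.0', '^1.2.0')); // true  -- a newer minor also matches
console.log(semver.satisfies('2.0.0', '^1.2.0')); // false -- only a major bump is excluded

// So "npm install" today and "npm install" next month can pull different
// transitive versions unless something like shrinkwrap records the exact
// resolved tree.
```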
Indeed; a lockfile is just a workaround for the problem of "just give me whatever" dependency declarations. Solve the problem at the root rather than piling hacks on it.
I'm not 100% sure from your comment, but are you suggesting that by default you should vet every version exactly and freeze it at the point you defined it?
As someone writing a lot of ruby, "give me whatever" is analogous to "give me the latest unless I specify otherwise", which I consider to be a very good default. It keeps me up to date with security issues, and incompatibilities between libraries that the respective maintainers resolve amongst themselves with a minimum of manual work.
> are you suggesting that by default you should vet every version exactly and freeze it at the point you defined it?
Yes, this is the way that Guix and Leiningen and rebar3 and a bunch of other things work, and it is wonderful.
Pulling in new code without you asking is a fine idea for something like apt-get where you have a huge team doing QA on the entire system working together before it even hits your repositories, but for most package managers, the dev team is the one doing the testing, and upgrades should be done only with great care.
It does mean you have to watch for security updates, but this is true of all package managers.
> Yes, this is the way that Guix and Leiningen and rebar3 and a bunch of other things work, and it is wonderful.
Even in Nix/Guix, it's still ideal for upstream projects to express their dependencies in terms of ranges (semver-wise); otherwise we run into the problem of having either really large run-time dependency closures, or problems around e.g. wanting to use multiple (over-specified) versions of C libs within the same process.
As the current maintainer of Nixpkgs' Bundler-based build infrastructure, I've found the lockfile approach that Bundler uses to be quite frustrating - in part because Bundler's design is antithetical to packaging, but also due to the build times and sizes of the resulting packages, compared to C libraries. (People give C a hard time wrt productivity and security and such, but when it comes to packaging, C libs are usually so much easier to work with than most other higher level languages.)
I would love to see more adoption of semver, and possibly Haskell's PVP (https://wiki.haskell.org/Package_versioning_policy). Granted, dynamic programming languages don't have the benefit of making API breakage obvious at build time, so perhaps the best we can do in such cases -- if we want any certainty that packaged applications will actually function correctly -- is lock down every dependency version precisely per application...
If you want to do this in ruby you can just specify a version manually. If you don't, you still get versions frozen by default with Gemfile.lock. It doesn't pull in anything automatically—by default you get no updates, you can choose to update a single dependency, or the whole thing if you want to verify it works on the latest (useful for libraries for instance). I'm not sure I see the downside.
FWIW I have been doing Ruby for over a decade now, and I hold up Bundler as one of the great success stories of open source. It is one of the reasons I hold Yehuda Katz in high regard: he was able to solve a really big problem in the community and hammer it into shape aggressively over a period of two years, with a lot of doubters and naysayers (even Rubygems core was against Bundler for a long time), until it finally became solid for so many use cases (libraries vs apps, private vs public, development vs deployment, etc.) that it solved nearly everyone's problems and everyone adopted it.
> I hold up Bundler as one of the great success stories of open source
It's a great success in many ways, but the problem it solves is completely self-inflicted by rubygems. I've also been doing Ruby for over a decade, but I've also learned a lot from other library ecosystems, and I feel pretty confident saying that disallowing version ranges makes all those headaches completely evaporate.
I'm with you that version ranges cause problems, but my belief is that lockfiles are a better solution.
Generally I'd use semver ranges in libraries, and then fixed versions + lockfiles for transitive deps in applications.
I suppose this is roughly equivalent to doing `:pedantic :abort` in Leiningen, except you won't have as many warnings to squash -- either way you have to rely on the test suite to tell you if the versions you've pegged work.
We use Ruby extensively in testing and infrastructure and have lost months of cumulative developer time to this attitude. It works for a single user constantly making changes to a small piece of code and prepared to work out the solution to dependency change problems as soon as they come up. However, it doesn't scale to a team of people working on very large code bases.
One example of this is having a repeatable developer setup guide. If the dependencies may have changed by the time a new joiner starts, your setup guide could very easily end up useless or misleading -- and that's without anything at all in your own codebase having changed.
Shrinkwrap fixes this, but always gets added after things went wrong several times already. It should be the default.
> Mix, Erlang's package manager (used by Phoenix / Elixir)
Slight nitpick: Mix is part of Elixir; I've never seen a (non-Elixir) Erlang project that uses it. I don't think Erlang has an equivalent officially blessed tool, but the most popular one is rebar / rebar3.
(While I'm nitpicking: mix and rebar are more build & dependency managers; mix uses hex for package management. I believe rebar3 can also use hex. In ruby terms, mix ~ bundler (among other things); hex ~ rubygems).
This is a good start, but it has issues. You probably don't want all of your locally installed packages in a requirements.txt file. Instead, they should be curated. I've been using pip-compile [0] for a while now (see the author's blog post [1] for a detailed explanation) and have become a big fan of it. With this model, you should enumerate only what your application uses directly and let pip-compile convert this list to a full version-locked requirements.txt. (Shameless plug: I also wrote a bit about why this is the best currently available option for specifying Python dependencies [2].)
This should be used with virtualenv, not as an alternative to it. I have a bunch of libraries and tools in almost all of my virtual environments that are not application dependencies and have no business being installed in my production environment (e.g. bpython, tox, flake8, and neovim). This approach also handles things like a dependency being dropped gracefully (if some library no longer needs a dependency, it magically gets removed from your locked requirements.txt next time you compile requirements.in). Python's package management tools are making great strides (see pip's recent peep-style hash checking support), and pip-compile is a big win in that category in my opinion.
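To make the model concrete, here's a hypothetical example (package names and pinned versions invented): requirements.in lists only the curated, direct dependencies, and pip-compile expands it into a fully version-locked requirements.txt, annotating where each transitive pin came from.

```
# requirements.in -- only the direct, curated dependencies (hypothetical)
flask
requests

# requirements.txt -- generated by `pip-compile`, every transitive dep pinned
flask==0.10.1
itsdangerous==0.24    # via flask
jinja2==2.8           # via flask
markupsafe==0.23      # via jinja2
requests==2.9.1
werkzeug==0.11.3      # via flask
```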
And it's so awesome when you discover devs have used lockfiles to leave you stuck on known-insecure gems instead of updating their fucking code to support the newer patched version.
Which is still better than having your server crash due to a bug in a dependency of a dependency that got updated without your knowledge when you deployed something trivial. It has happened to me more than once.
Same issue here.
This non-deterministic behavior is really fxxked up. One time our build process suddenly began to fail; we spent a few hours on the issue, and it turned out that one of the babel-core patch releases was broken.
Some of npm's fundamental design decisions are seriously wrong.
Speed, and support for developing multiple modules that have cross dependencies. npm link sucks, and updating and publishing downstream dependencies is a chore I wouldn't wish on my worst enemy.
Normally I am not a fan of computer-science terms taking over ordinary words in search engines (think AJAX), but I can accept computer words replacing words that symbolize violence.
Your results, even when logged out, are heavily biased towards your previous query genres. For instance, I'm not logged in, but am at home with an IP address that hasn't changed in over a year despite being leased. When I search Google for the term "Ruby", the entire first page is for the Ruby programming language.
My father works with jewelry. Last week while visiting him, I did in fact search for the term "Ruby" on his computer in his normal browser, and even though he was also logged out, I wasn't able to find one result dealing with the programming language on the first two pages.
Google understands that Ruby can mean different things to different people, although I don't think it's gotten there with "ied" yet (and rightfully so, it's probably not searched for very often, except in the violent sense)
I guess we're supposed to assume that V8 is smart enough to inline that sort of function definition, but it seems deeply weird to me that drawBar() is defined inside the "hot" method like that. Also, I would have expected that streams would have been used for this. Logging is a very common "simple example" on node streams how-to pages.
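Roughly the shape of the concern (not npm's actual source, just a sketch): version A recreates the helper closure on every tick of the hot update path, while version B defines it once at module scope.

```js
// A: helper defined inside the hot method -- a new drawBar is created per call.
function updateProgressA(completed, total) {
  function drawBar(fraction) {
    const width = 20;
    const filled = Math.round(fraction * width);
    return '#'.repeat(filled) + '-'.repeat(width - filled);
  }
  process.stderr.write('\r' + drawBar(completed / total));
}

// B: helper hoisted out of the hot path -- defined exactly once.
function drawBar(fraction) {
  const width = 20;
  const filled = Math.round(fraction * width);
  return '#'.repeat(filled) + '-'.repeat(width - filled);
}
function updateProgressB(completed, total) {
  process.stderr.write('\r' + drawBar(completed / total));
}
```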
I'd also like to take the time to mention that the caching system when installing / doing other actions is extremely inefficient (for my timings I already have the progress bar turned off, and this is a project with around 80 modules shared between dev / prod):
I started on a very (I would like to emphasize very a thousand times over) basic proof-of-concept to show how much faster it could be, by orders of magnitude:
All this does is build a JSON file of every package you currently have installed and use it as a lookup store the next time, instead of rebuilding it on every install; this was targeted at installing / uninstalling existing packages, not fresh installs.
Fresh installs would benefit from bulk lookups via the API imo.
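For readers who haven't seen the proof-of-concept, the idea is roughly this (file name and layout here are made up, and scoped packages are ignored for brevity): walk node_modules once, write out a lookup file, and on later runs read the cached JSON instead of re-reading every package.json.

```js
const fs = require('fs');
const path = require('path');

const CACHE_FILE = '.installed-packages.json'; // hypothetical cache location

// Slow path: scan node_modules once and record name -> version.
function buildLookup(nodeModulesDir) {
  const lookup = {};
  for (const name of fs.readdirSync(nodeModulesDir)) {
    const pkgPath = path.join(nodeModulesDir, name, 'package.json');
    if (!fs.existsSync(pkgPath)) continue; // skips .bin etc.
    const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf8'));
    lookup[pkg.name] = pkg.version;
  }
  fs.writeFileSync(CACHE_FILE, JSON.stringify(lookup, null, 2));
  return lookup;
}

// Fast path: reuse the cached lookup on subsequent runs.
function loadLookup(nodeModulesDir) {
  if (fs.existsSync(CACHE_FILE)) {
    return JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));
  }
  return buildLookup(nodeModulesDir);
}
```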
Looks good. I wonder if it'd be too much scope creep to add extension points for LAN/proxy caching. I know of one small dev team who had a single ADSL connection shared between ~10 people - NPM downloads would have been painful.
I think bug tracker comment forms for open source projects need some kind of question first (at least from people who aren't active contributors), like "Please select your type of comment":
* Nontechnical: I would like to express my eagerness that the bug be fixed or request its priority be increased, but don't have any additional technical information to add
* Technical: I have useful technical information to add, beyond what is already in the bug
Then "Nontechnical" type comments would be hidden from the developers by default (unless they specifically ask to see them), but their count could be used as a bug prioritisation mechanism (effectively leaving a "Nontechnical" type comment would be counted the same as a vote)
I guess it's GitHub's Eternal September. I'm all for more people having access to development tools, but GitHub really has to step up the tools available for managing issues/comments.
I posted this below, but I'll post again here for visibility. What the heck is up with npm install on iTerm 2? I had no idea something was weird until recently, when I did an install with Terminal.
I had the same problem (and figured it was normal for a while), after some looking-into it I was able to fix it by unchecking "Treat ambiguous-width characters as double width" [0]
The funky progress bar is about the coolest thing about npm. They could let it take a minute if they wanted to and I would keep watching it with amazement.
You mean the progress bar that takes up way too much screen real estate, to the extent that it completely cuts off any useful information from the install process?
Yes, that one. I agree with you, by the way. I don't have a terminal in front of me, but the progress bar seems to leave something like 10 characters or less for the rapidly changing status updates, in exchange for the vast majority of the horizontal real estate. I bet if they had split the bar onto a separate line from the status, it would run faster.
Oh, that's fine. You can just download it from one of those mirror sites that host their own copies of the files. You can trust that it's genuine because of that big green checkmark on the page.
This makes sense. It's like when I'm solving a project euler problem, I don't use verbose output because it runs much faster without having to print everything to the screen. (Python)
That's exactly my experience. Once I wrote a naive PE solution in Node.js and added some logging at each step. After a few minutes and still far from a solution, I stopped it and disabled the logging. I then got the solution within seconds. The difference was a few orders of magnitude, which was pretty surprising to me.
Sometimes writing to the terminal can be the slowest part because it's unbuffered by default in an interactive prompt, so even just redirecting stdout to /dev/null (or even a file) can be faster. Had that happen some ten years ago, so my explanation could be out of date or misremembered but the effect was real.
Your explanation is correct. That is the semantics of Unix stdio buffering as implemented by glibc. So across a wide variety of languages, you'll see a significant performance difference if you are doing lots of writes to the terminal.
On the other hand if you didn't do this, then interactive terminal programs would be entirely unusable.
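A crude way to see the effect described above in Node (numbers will vary wildly by terminal and platform, and Node doesn't use glibc stdio, but the many-small-writes cost is analogous): compare running this normally in a terminal versus redirected to /dev/null, and compare the two loops against each other.

```js
const N = 100000;

// Many tiny writes, one per line.
console.time('one write per line');
for (let i = 0; i < N; i++) {
  process.stdout.write(`line ${i}\n`);
}
console.timeEnd('one write per line');

// Same output, batched into larger chunks before writing.
console.time('batched writes');
let buf = '';
for (let i = 0; i < N; i++) {
  buf += `line ${i}\n`;
  if (buf.length > 8192) { process.stdout.write(buf); buf = ''; }
}
process.stdout.write(buf);
console.timeEnd('batched writes');
```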
I don't know the exact cause here, but I can say I'm thankful for the systems background that I gleaned over the years before moving into programming.
Most sysadmin types know that displaying output during a file copy slows down transfers greatly because the screen vsync holds up the operation.
When I write utilities today that do similar things, I only display output when writing and testing the program. Otherwise I'll write to memory then dump the results in a logfile at the end.
In this case there is probably a less intensive way of providing status updates; it can probably be resolved by doing intermittent checks.
Someone with more UI development experience can comment here, but I always thought progress bars were more of a signal to the user that "something is being done", and not a representation of how much is done and how much is left.
The whole point of a progress bar is to show work progress (as a fraction of total work). The issue is that it's often difficult to map "work done" to a linear percentage, so either you do it on the cheap and your progress bar is just a souped-up spinner with no relation to wall-clock work time, or you do lots of extra work to get more precise estimates but end up taking more time overall.
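One common compromise looks roughly like this (a sketch; the phase names and weights are invented): give each phase of the work a rough weight so the bar at least moves monotonically, without doing an expensive exact estimate.

```js
// Hypothetical phases of an install, with guessed relative weights.
const phases = [
  { name: 'resolve',  weight: 0.2 },
  { name: 'download', weight: 0.5 },
  { name: 'extract',  weight: 0.3 },
];

// Map "fraction done of the current phase" to an overall fraction.
function overallProgress(phaseName, phaseFraction) {
  let done = 0;
  for (const p of phases) {
    if (p.name === phaseName) return done + p.weight * phaseFraction;
    done += p.weight;
  }
  return done; // unknown phase: report everything before it as complete
}

console.log(overallProgress('download', 0.5)); // 0.45, i.e. 45% of total work
```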
It uses a very different dependency resolution algorithm, the biggest difference as I understand it is it will install dependencies maximally flat whereas npm 2 was much more nested.
I believe the reason it is slower is it can't do as much parallelism as npm 2. Also, npm 2 was susceptible to a lot of race conditions and non-determinism, especially with compiled dependencies.
...has he controlled for network speed? Unless he's set up a local NPM caching proxy, he's going over the 'net for these packages. It'll take more than 1 datum before I believe this hype.
Many people have confirmed a major speed improvement. I've repeated this with an empty cache, primed cache and in different orders many times and have always got the same results. Having said that, a good number of people have reported seeing no speed improvement too.
Upon first seeing npm 3.x I immediately added `--progress false --color false` to my npm installs and never looked back. Color and terminal graphics are the work of the devil.
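For anyone who wants the same behaviour without typing the flags on every install, npm lets you persist these as config (assuming the `progress` and `color` config keys behave like their command-line counterparts):

```
# persist the settings
npm config set progress false
npm config set color false

# or equivalently in ~/.npmrc
progress=false
color=false
```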
Whether you claimed it is not in question. You're saying "[color in the terminal] is the work of the devil", which has a clear intent to dissuade people from coloring terminal output. Which, frankly, is insanity. Most apps do it well nowadays: --color=auto/always/never, usually defaulting to auto.
So when GP says "not everyone is you", they mean be considerate. If you want color off, you're being catered for. But don't try and push that choice on everybody who isn't you.
No, you're making bad jokes in a context where they aren't welcome. You may want to try Reddit or Slashdot. They enjoy this sort of thing, I understand.
As an undergrad, I had a 14.4 kbps modem, and an amber WYSE terminal. With that, I was able to do contract work. It reminded me of the amber monitor I had on my Apple II ten years before. A solid phosphor CRT with no shadow mask is a fine thing.
Colors on a modern LCD are great too; give it a chance!
I contracted at Nortel in the mid 1990's. In my cube I somehow acquired a Sun Sparcstation 20 that nobody was using, and found an NCD black-and-white (or was that one grayscale?) X terminal on a shelf somewhere, collecting dust. I found an image for it somewhere, and set it up to TFTP boot from the SS20 (which was running Red Hat Linux) to a login prompt (XDM). It was a nice little setup.
A couple of years ago I was just about to throw out all my O'Reilly X Window/Motif books from 20 years ago when I landed a contract to update a K&R C based Motif system running on 32-bit Solaris connected to - of course - Sybase. It felt like I had travelled back in time.
I have X Window/Motif O'Reillys from more than 20 years ago which are still shrink-wrapped together. I've just been carrying them from place to place.
At the risk of getting downvoted myself, I wonder why all of your comments are getting downvoted. Is it such a terrible opinion that it doesn't even belong on HN? A lot of people turn off syntax highlighting. It can be legitimately distracting, and it messes up tooling.
I was asked a question and provided an answer. So I don't care for syntax highlighting or colors in my editor and command line - big deal. I am not imposing my view on others.
When working with a command line one wants npm or any command to run as quickly as possible. Any graphics that slow operation of the command should be an opt-in, not an opt-out.
Everyone is entitled to their opinion, but some people prefer to suppress others' opinion.
Making vehement categorical statements about matters of taste is an old flamewar thing on programming message boards. I'm guessing the downvoters don't want that trope on HN. If so, I have to agree with them, because such discussion is basically never substantive.
Don't even get me started on the number of reasons why nodejs should not be running on servers, I'll be here all day. We're currently deploying a NodeJS app and we'll never do it again - the amount of problems with that ecosystem is quite amazing. As I tweeted earlier today "JS ppl tend to have marketing & UI/UX experience so JS crapcode gets popularised, becomes widespread, then we have to deal with the fallout".
I'm genuinely interested. I've been using nodejs for hobby projects and scripts, so haven't seen too many issues. But I'd like to know how things are and what issues you've been seeing.
Howdy. We've had lots of issues with single-threaded performance and poor memory management, especially around garbage collection; we had to modify our SELinux policies because it likes to execute memory off the stack; npm has been very unreliable and often unreasonably slow; error handling of the node index app server seems to have some interesting behaviour, though I haven't dug into that one too deeply yet; and the list goes on. As an experienced systems engineer, it's very clear to me that this is browser technology that was never designed nor fit to run on servers.