Just a note to the author if they are reading: you might want to remove that OSC code for inline HTML.
There are reasons dynamic codes like those don't exist. Good reasons. (In my own naivete, I once thought an OSC code to access scrollback would be a good idea. You can imagine how insecure that was). It's not a good idea to run any programs in this terminal until that OSC code is either removed or the HTML is somehow sanitized.
That OSC code might be the key to make this project really interesting and powerful. But you are right, to make this safe, some sanitization or other sandbox would be needed. This could be hard/impossible if there is JavaScript involved in the inline HTML.
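For context, an OSC sequence is just `ESC ] code ; payload BEL`, so any process that can write to the tty can emit one. A hedged sketch of why an unsanitized inline-HTML code is scary (the code number 99 here is made up, not necessarily Butterfly's actual one):

    # hypothetical: if OSC 99 meant "render this payload as HTML",
    # merely cat-ing a hostile file would inject script into the page:
    printf '\033]99;<img src=x onerror="alert(document.cookie)">\007'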
To use this on a remote server without exposing it to the open internet, keep it listening on localhost only; then, on your own machine, you could for instance launch:
ssh -Nf -L 9999:localhost:57575 remote-server
and then go to http://localhost:9999/, with ssh taking care of encrypting the connection between you and the remote server (-N runs no remote command, -f backgrounds ssh, and -L sets up the local port forward).
Not being funny, but isn't the purpose of a web-based terminal to give you access to a terminal-like environment when you don't have access to a standard terminal / ssh?
A use case (warning: I've not thought this out very deeply):
Abe has shell access via 'ssh' to Server. He wishes Bob and Carol to have access to the server but for whatever reason they don't have SSH.
Abe shells in and sets this up for Bob and Carol. Bob and Carol can use the web-based terminal and are happy campers.
Or.
Abe controls the server via Puppet [or Chef, Ansible]. Something you can do with Puppet is disallow SSH access - one does this to remove the temptation to 'cheat' and hand-configure the system. Abe retains the ability, through Puppet, to exec a command that lets Bob or Carol (or Abe) into the system 'in case of emergency'.
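A rough sketch of how that might look (the service name varies by distro, and the --port flag is a guess - check butterfly's help):

    # enforced baseline: no interactive SSH
    # (normally in the manifest; shown via `puppet apply -e` for brevity)
    puppet apply -e "service { 'ssh': ensure => stopped, enable => false }"

    # emergency hatch: start the web terminal, bound to localhost only,
    # to be reached through a tunnel or authenticated proxy
    butterfly.server.py --host=127.0.0.1 --port=57575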
tty.js[1] works on Windows (the backend is linked against the winpty library). I'd normally feel like I'm tooting my own horn by inserting this here, but it looks like the client-side part of Butterfly is a fork of my term.js[2]. I'm not sure whether Python's pty implementation has some kind of workaround specifically for Windows.
MSBUILD : error MSB3428: Could not load the Visual C++ component "VCBuild.exe". To fix this, 1) install the .NET Framework 2.0 SDK, 2) install Microsoft Visual Studio 2005 or 3) add the location of the component to the system path if it is installed elsewhere. [E:\dev\node\term.js\node_modules\socket.io\node_modules\socket.io-client\node_modules\ws\build\binding.sln]
I don't quite understand the use of it. iPython Notebooks I get - they really give you something you don't get in a terminal window - but I'd take my yakuake over a web-based terminal any day. It integrates better with the desktop environment, has global hotkeys, can be scripted via D-Bus, doesn't need a server to run, and doesn't have the potential security issues of a web-server-based terminal, etc.
This is super-cool. Not sure it's advisable to use it in place of ssh, but maybe with ssl+http-auth it could be workable? It definitely adds some convenience for when I'm away from one of my boxes and I don't want to deal with putty.
As per all HN comments, one downside is that it kind of messes with my vim colors, but that may be because my vim colors are messed up to begin with.
If you are concerned about security and don't want to deal with putty (I can commiserate), there's always the Secure Shell Chrome/Chromium extension. If you haven't seen it already, you might want to give it a look.
Well right off the bat: You don't have to install anything. You also don't need to worry about updating a thousand clients if a security problem crops up--you just update the server and you're done.
Also, in terms of "why" there's all sorts of neat things you can do with a web browser that aren't so easy with regular terminals. For example, displaying inline images (if you code it right). Gate One can display images, PDFs, and play back sound files right there in your terminal if you do something as simple as 'cat somefile.png' or 'cat somefile.ogg'.
Another vote for Chrome Secure Shell. Open source, well made, and hterm supports copying to your clipboard when tmux or mosh doesn't eat the sequence. I have limited font choice on the Chromebook, so I embed a powerline font in mine. Very cool. Hacking up some insecure DIY server backend is a bit silly when you can have libssh or mosh compiled into NaCl.
When I wrote my first web-based terminal (Escape From The Web) it was based on AjaxTerm. It kinda sucked for more than one terminal at a time so I went back to the drawing board and wrote Gate One. It turned out great (if I do say so myself!) and soon it will support running X11 applications as well (see: http://youtu.be/6zJ8TNcWTyo)
The security of this worries me. I would have to understand their code very well before I would be comfortable running a program that gives root access to anyone over a web browser.
> You can set the bind host with butterfly.server.py --host="0.0.0.0" which will allow other users to connect to your terminal. A password will be asked but IT IS NOT SECURE! So it's recommended as of know [sic] to run this only on local network for testing purposes.
That doesn't help if the vulnerability involves taking over your machine via JS on an untrusted page, causing your own browser to conduct the exploit against your terminal.
I really like the quick history selection feature, and now I want it in iTerm2. A browser-based terminal isn't something I'd be likely to use, but congrats on writing it.
This would be nice to get working in ChromeOS - would it be possible to start this at runtime? Which files should be edited? I've got a dual-boot Chromebook, so I can mount the ChromeOS partitions from Linux.
Yes, there is: system packages should be installed with the system's package manager. Otherwise, you have two systems managing the same set of files, causing tears and suffering.
Packages not installed with the system's package manager should be installed elsewhere, e.g., a local install in a user's home directory, or (in the worst case) /usr/local or /opt.
You're assuming everybody follows the package-manager religion. E.g., it would be fine to do this on OS X, and pip install will follow the best UNIXy practice.
Unless you intend to (eventually) break your OS, system-wide packages should be installed with system-wide package managers. That is, dpkg/apt, rpm/yum, pacman, emerge or similar.
Failure to comply with this rule bites hard when you install some other package that uses Python and pulls in a dependency from the OS repository that you had already installed from PyPI. That fails because the files are already present (probably from another egg's version), and since you're probably already using that egg somewhere, you can't readily replace it. Manual clean-up of such a mess is neither pretty nor fun.
So, if you want a non-project-bound package (like ipython), do so with `pip install --user`. Otherwise, virtualenv is the tool.
(Obviously, there are non-trivial exceptional cases where sudo pip is fine. I believe they're quite rare.)
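Concretely (package names and paths here are just examples):

    # user-level install, outside the files the system package manager owns
    pip install --user ipython

    # project-bound dependencies go in a virtualenv
    virtualenv ~/envs/myproject
    . ~/envs/myproject/bin/activate
    pip install requests

    # if you suspect a pip/OS collision, check who owns the imported copy
    python -c 'import requests; print(requests.__file__)'
    dpkg -S "$(python -c 'import requests, os; print(os.path.dirname(requests.__file__))')"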
If you want a concrete failure mode: if this installs requests globally and you're on Debian stable - which ships 0.12.1, from before the API break - then after a pip install you may end up with all the requests-dependent software on your system broken.
When this happens, please don't file bugs with Debian. Thanks.
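A quick way to see the mismatch before it bites (python-requests is Debian's package name for requests):

    apt-cache policy python-requests   # what the OS ships and pins
    pip show requests                  # what pip has installed globally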
> When this happens, please don't file bugs with Debian. Thanks.
We are all adults here. I believe the implications of using pip instead of apt on Debian are pretty clear to anyone using it. This is not a universal problem, though.
> Everyone should absolutely be running from a virtualenv. Never touch the system python.
Why exactly? I find it convenient to have some packages installed system-wide so that they can be used by quickly loading up a python shell without having to activate a virtualenv first, or when they're needed outside any specific project, e.g. requests, nose, jedi, pyflakes, sphinx, etc.
For most systems, if you have a "system python" you'll want the system package manager to manage that python (and python packages). Because breaking python can mean breaking the package manager.
Personally I enjoy having a few bits installed under ~/opt/py-venv and simply adding ~/opt/py-venv/bin to my path. There's usually no need to activate a venv to use it -- just call that venv/bin/{python|pip|hg|ipython|<whatever>}.
In other words, whenever I "pip install something" that something is installed in my "default" virtualenv. And if/when things get out of hand/I need to upgrade to a new python -- I can just recreate the virtualenv and install whatever is needed.
I have a default.env that is activated in my .profile so I can always just do a `pip install <package>` w/o having to touch the system python. The system python is for the system, not me.
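A minimal sketch of that "default venv" pattern (paths are just examples):

    # one-time setup
    virtualenv ~/opt/py-venv                      # or: python -m venv ~/opt/py-venv
    echo 'export PATH="$HOME/opt/py-venv/bin:$PATH"' >> ~/.profile

    # from then on, this lands in the venv, not the system python
    pip install requests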
Are virtualenvs (or Ruby bundles, or anything else equivalent) still relevant when using Docker containers? It seems to me that the proper thing to do, when you're making a container that runs a python app, is to install the app's dependencies to the container's global environment.
I keep all of my systems the same between dev, test, and prod, so I still deploy into a virtualenv inside containers. No reason to ever run something as root.
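Roughly what that looks like in a container (the image, paths, and app name are assumptions, not anyone's actual setup):

    FROM python:3-slim
    # dependencies live in a venv, same layout as on dev/test machines
    RUN python -m venv /opt/venv && /opt/venv/bin/pip install requests
    # drop root before running anything
    USER nobody
    CMD ["/opt/venv/bin/python", "-m", "myapp"]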
Homebrew installs a specific pip; you still need to upgrade it if you want the latest, although it should be fairly up to date. Type `brew edit python` to peruse the recipe.
But how many people actually need system-wide installs on OS X? That's only relevant if more than one uid needs access to the stuff.
(One might argue that in the case of one uid per Mac, doing a system-wide install isn't a problem -- but I'd counter that adding a new user and testing whether any breakage is introduced by various local package installs is easier/quicker than having to reinstall OS X...)
I fail to see anything cool here. What's the point of using a browser for the UI layer if you need a backend app anyway?
Or, if you intend to use it remotely - Atwood's law aside - what's the point of replacing already well-working client apps, usually with the old trusty OpenSSH client under the hood, with some browser-based kludge? (For example, you'll have to reinvent auth.)