cockpit-machines is available in a recent version in Debian backports. Installing it is trivial: no configuration, just browse to https://hostname:9090/ and it works. https://cockpit-project.org/

Red Hat announced that Cockpit will be the long-term successor of virt-manager:

https://www.redhat.com/en/blog/managing-virtual-machines-rhe...

https://blog.wikichoon.com/2020/06/virt-manager-deprecated-i...

Cockpit has frequent releases; latest: https://cockpit-project.org/blog/cockpit-227.html

It doesn't have all the features of virt-manager, far from it, but it looks promising.
The main issue with virt-manager is that it's a desktop application and you can't really collaborate with others when managing infrastructure.
Cockpit solves this issue. The feature set is slightly different, but mainly it is more limited in what you can manage.
When running different types of infrastructure at the same time, e.g. KVM + AWS + Azure + ... it won't help much. In such cases it would make sense to check out Mist (https://github.com/mistio/mist-ce), which does something similar to Cockpit but for ~20 infra techs.
Virt-manager can connect simultaneously from many different terminals to many different libvirt servers. What do you mean by "you can't collaborate with others"?
I mean that you can't really have multiple people, from multiple teams, accessing the same infra with different types of rights and with centrally managed authentication/authorization/logging.
That's exactly what libvirt lets you do. It uses polkit (PolicyKit) to handle authorization; Cockpit doesn't change that and still requires you to use polkit to control who has access to what.
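For context, libvirt's coarse-grained access check is the org.libvirt.unix.manage polkit action; a minimal rules sketch (the "vmadmins" group name is invented), dropped into /etc/polkit-1/rules.d/, looks something like:

    // 50-libvirt.rules: grant the hypothetical "vmadmins" group
    // full libvirt management access.
    polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.isInGroup("vmadmins")) {
            return polkit.Result.YES;
        }
    });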
Can you elaborate on this? Does cockpit include a handful of services/sockets that must be activated in sequence? Or does it rely on other parts of the systemd constellation?
If you like this, you should really check out oVirt. It's the open upstream of RHV. It should look very familiar to you.
Red Hat obscures it a bit, presumably because there's very little difference from RHV (and it might detract from sales), so the website is not so shiny. But play with it.
Red Hat really has gone all in on Cockpit as well. It is very polished, pretty full-featured, and continues to improve. It is also very easy to set up on RHEL installations. When you log in, the MOTD actually has instructions on how to enable Cockpit; that's how hard they're pushing it.
I gave it a shot, and it was able to see my running libvirt VMs, although it could not create one due to an obscure error about not supporting the "custom" CPU type. My existing VMs run in full virtualization, so it is probably just a minor bug in Cockpit, seeing that it is after all just a libvirtd frontend in this regard. Aside from that, Cockpit looks fairly stable and it has potential, but keep in mind it is very simple.
For my homelab I use Proxmox on a couple of machines, and it works great for managing containers (LXC containers, which are closer to a traditional VM). Most people/companies don't need the complexities that come with Kubernetes or other tools like that.
This is the Cockpit user interface for podman containers.
It is being actively developed and has not yet reached feature parity with cockpit-docker. For now you can do basic image and container tasks for both system and user containers.
If you haven't tried it, the "Virtual Machine Manager" GUI [1] is also surprisingly usable on Desktop Linux. I use it with both Windows and Linux images, and getting it to work with the usual guest extension niceties is straightforward.
From what I remember, getting VirtualBox or VMware to launch and run properly after a few months of automatic upgrades and not launching your images for a while was always kind of a gamble. With libvirt, everything just seems to work, and your images are just ready to go when you need them.
Libvirt is also responsible for keeping the guest exactly the same after upgrades; a basic QEMU command line does not guarantee that the guest hardware remains the same when you upgrade to a newer version, while Libvirt uses the more complicated and less human-friendly options to ensure that.
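For example, libvirt records a versioned machine type in the domain XML rather than a floating alias (the exact version string here is illustrative):

    <os>
      <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
    </os>

A hand-written `-machine pc` on the QEMU command line would instead silently resolve to the newest machine version after an upgrade.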
Libvirt does a lot more for QEMU than for other hypervisors, so much that libvirtd's initial name was qemud.
For simple usage, GNOME Boxes is really nice. It'll automatically download the ISO of basically any Linux distro when you click on its logo, automatically go through the installer, and set up copy/paste, dynamic resizing of the VM screen when you resize the window (I don't believe virt-manager supports that), and drag/drop support. It uses libvirt behind the scenes, so you can use virt-manager with VMs created by Boxes and vice versa.
I don't think I ever tried drag and drop, but I've gotten copy/paste and window->display resizing to work in virt-manager after manually setting up the kvm/qemu guest tools.
While I respect the job that libvirt does (it works — high praise for software), it’s unfortunate that it is also the answer to the question “how can I represent all these virtualized things using XML?”, which was in fashion when libvirt was created.
It’s also a bit misleading to characterize cloud providers as building on libvirt. Libvirt is useful as a mostly hypervisor-agnostic wrapper, which is super useful for enterprise on-prem software, but kind of the opposite of what big providers need and build for themselves.
I wonder what we will look back on as the XML of today. Everything is schemaless JSON and YAML; surely we’ll look back and wonder WTF everyone was thinking? But alas, it’s probably not a data format at all. Only time will tell.
JSON is a bad configuration format simply for the fact that it doesn't support comments. Some parsers do but most don't. XML for all its verbosity and complexity at least has comments where I can quickly try out configuration changes without needing to save the old configuration somewhere else.
Hell, even INI files had support for comments and were just as expressive as JSON. I wonder why we regressed in that regard. Was it just because of the success of JavaScript and readily available JSON parsers? Because I'd argue that an INI parser is just as easy to write.
This is exactly why TOML [0] is gaining traction as a simple configuration file format. Rust's cargo has been TOML from day one (`Cargo.toml`), and Python is moving this way with `pyproject.toml`.
For more general data structures, remember that JSON is a true subset of YAML [1]: Switch to a YAML parser and you can start optionally adding comments to your files while still being compatible with legacy input.
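A quick illustration of that trick in Python (PyYAML here is my assumption; any YAML parser should behave the same):

    import yaml  # pip install pyyaml

    # A plain JSON document, plus a comment that a JSON parser would reject:
    doc = '{"name": "web1", "vcpus": 2}  # comments are legal now'
    print(yaml.safe_load(doc))           # {'name': 'web1', 'vcpus': 2}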
YAML is generally derided as too complex and as transforming data in unintended ways (the classic example being the unquoted string no parsing as the boolean false under YAML 1.1 rules).
I see a lot of Rust programmers preferring RON over TOML because it is much less complex and doesn't have multiple ways to express the same thing https://crates.io/crates/ron
I hate YAML. XML and JSON have syntaxes that are very easy to learn. You basically see one example and know 99% of what you need. YAML has multiple ways of doing the same thing (e.g. two array syntaxes) and is generally too "clean"-looking, to the point of being unreadable.
Because JSON was designed explicitly as a serialization format, not a config format. For configs, use JSON5. At least IMO, most sane config formats seem to be JSON5 these days.
> JSON is a bad configuration format simply for the fact that it doesn't support comments. Some parsers do but most don't. XML for all its verbosity and complexity at least has comments where I can quickly try out configuration changes without needing to save the old configuration somewhere else.
Do this with libvirt please and report back how it went :)
Libvirt uses a proper XML parser, so that's almost certainly a non-issue. The problem with libvirt XML files is that they are snapshots, and that they lack obvious defaults.
What I mean by "snapshots" is that you can't edit the XML file for a running VM on disk; instead you have to go through either virt-manager or virsh's dump/load functions. If you do edit an existing XML file on disk, libvirt will just overwrite it for you.
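The supported round trip looks roughly like this (domain name made up):

    virsh dumpxml myvm > myvm.xml    # snapshot the persistent definition
    $EDITOR myvm.xml                 # edit the copy, not the file libvirt owns
    virsh define myvm.xml            # load it back; libvirt rewrites its copy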
And the XML format is a bit more verbose simply because XML schema authors make it so. SGML could be even nicer for human writers.
As for comments in JSON, many parsers are not strict and would let you insert arbitrary elements, so you might add {"comment":"whatevah"} where you need it.
I actually _do_ do this quite often, usually for local changes (attaching network devices or storage devices).
If you fire up `virsh edit <resource>` you get a live view of the resource, which can be updated in place. This is great for commenting out some things and uncommenting others for quick-and-dirty modifications (some changes require a VM restart, though).
That has to be a recent change, because editing the XML used to cause it to roundtrip through libvirt, which doesn't keep the XML DOM around; anything not in the internal C structs would be removed from the files, including all comments.
Weird, I've done it quite recently on a Debian system, and it reformatted the XML and removed everything it didn't recognize. If you Google this, it's a common issue, and apparently the reason the top-level "metadata" tag was added: that metadata tag is carried around as DOM, so it doesn't get purged.
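That element carries arbitrary namespaced XML through libvirt's re-serialization; something like this (namespace and content invented) survives `virsh edit` round trips:

    <metadata>
      <myapp:note xmlns:myapp="http://example.com/myapp">
        free-form data libvirt won't purge
      </myapp:note>
    </metadata>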
I load config files as JS (not JSON), and only use JSON for data serialization. In plain JS you can have comments and don't have to put quotes around properties. I'm not saying everyone should have a full JS parser for their config files, but it's really nice.
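i.e. something like this (names invented), loadable with a plain require():

    // config.js: plain JS, so comments and unquoted keys just work
    module.exports = {
      listenPort: 9090,   // comments!
      useTLS: true,       // unquoted property names!
    };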
Haha, yeah, JSON is a terrible config format. As another poster mentioned, no comments. But the other big problem is inconsistent serialization: null vs. missing props, conversion of "true" to true. I've lost track of the number of times I've had to work around differences between the serializer and the deserializer on the other end.
It's really quite a bad format for how ubiquitous it's become.
> It’s also a bit misleading to characterize cloud providers as building on libvirt
Libvirt provides lifecycle management for KVM virtual machines, including orchestration of live migration, setting up SELinux to ensure isolation between QEMU processes and cgroups to limit resource utilization, creating network interfaces and bridging them to host networking, and more. Any cloud provider that uses OpenStack+KVM relies on libvirt for all these tasks.
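To give a flavour of that lifecycle API, a minimal sketch with the Python bindings (the domain name and destination URI are made up):

    import libvirt  # libvirt-python

    conn = libvirt.open('qemu:///system')   # local KVM/QEMU hypervisor
    dom = conn.lookupByName('testvm')       # hypothetical guest
    if not dom.isActive():
        dom.create()                        # boot the defined domain

    # Live migration is a single call once the peer host is reachable:
    dest = libvirt.open('qemu+ssh://otherhost/system')
    dom.migrate(dest, libvirt.VIR_MIGRATE_LIVE, None, None, 0)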
Sure, but OpenStack-based public clouds are in short supply (unfortunately? fortunately?). I think the short explanation is that pure OSS stacks limit providers' ability to move fast and differentiate, so it's a race to the bottom for OpenStack-based providers. (Rackspace's gambit was quite different, as they were arguably trying to do exactly that: commoditize cloud services.)
No, the short explanation is that no one else is able to put into their cloud the amount of money that Amazon, Microsoft, and to a lesser extent Google do.
... that’s a laugh of bitter jealousy. I’ve dealt with XML, but never with XML that came with a schema, or that reliably followed one. I'm not saying schema-less formats are great, but at least I can eyeball them to see what is going on.
XML turned everyone into a language designer during an era where we already knew that language design was a rare skill.
If I saw a schema, which I often didn’t, it usually didn’t say what the author thought it said. To a first order approximation, all the good ones I saw came from one tool (XMLSpy possibly?)
Namespaces ended up in a sort of uncanny valley that I can’t quite do justice to.
How about DocBook?
It's been a while, but I think even eclipse's project.xml used to have a schema, and would validate it if you tried modifying it yourself.
> Libvirt is useful as a mostly hypervisor-agnostic wrapper
Not sure if you're claiming this or not, but it's worth clarifying:
Libvirt is a hypervisor-agnostic transport library, but it is not a hypervisor abstraction library.
That is, you can use libvirt to talk to KVM or VMWare or Xen or Hyper-V. But you cannot, in general, take a VM config from one hypervisor and use it on another hypervisor: there are too many details of the underlying hypervisor exposed to make this possible. And if you build a tool on top of libvirt for one hypervisor, you can't just flip a switch and have it work on another hypervisor -- all of the code that generates your XML configs for (say) KVM will need to be rewritten if you want to use Xen or VMWare or Hyper-V.
As an example, in the config you don't really say, "Give me a disk, and here's the disk image." You say, "Give me a virtio disk of this particular version with these particular properties." If your hypervisor doesn't provide virtio, the disk simply can't be created. Which means the tool you're writing on top of libvirt needs to know the appropriate PV disk type for each hypervisor and use the right one.
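Concretely, the disk config names the PV bus explicitly (paths here are illustrative); a KVM guest typically gets bus='virtio', while a Xen PV guest would need something like bus='xen' with a dev such as 'xvda':

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>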
(At least, this was the situation several years ago, when a team from oVirt came to a Xen hackathon to see if they could get oVirt working on Xen. It turned out to be more work than they thought.)
It is not at all misleading: it is in fact correct to state that several open source "cloud infrastructure" projects rely on libvirt's ability to do a lot of guest life-cycle heavy lifting.
And not least, libvirt provides critical layered security for QEMU processes (i.e. VMs) through Linux namespaces, cgroups, sVirt ("Secure Virtualization", based on SELinux), and more.
I'm looking at Jinja-templated YAML with awe: how can anyone think it's a good idea? And yet this insanity is everywhere nowadays. XML is verbose, hard to write manually, and has many ways of representing the same thing, but it was actually designed, unlike YAML, which seems to be a hodgepodge of features, half of which need to be disabled in the safe variant.
I'm sure there are better answers here, but my main issue is that attributes and child elements offer duplicate functionality.
You could have <point><x>4</x><y>5</y></point> or you could have <point x="4" y="5"/>. There is often no consistency within a single spec over how this should be done, let alone between different specs.
JSON feels much more logical to me as well as being a whole lot simpler. If only it supported comments.
The functionality isn't quite duplicated. Attributes are order-independent and can't have child attributes, for example.
As far as I know, the original intention of the language designers was that attributes are for metadata and sub-elements are for data. For non-trivial schemas that form part of a data contract between systems or organisations, and/or are expected to evolve over time, I tend to stick to this approach. It results in more verbose data, but in my experience that's almost never a problem, and it can be an advantage if I have to drop into the data and actually read it.
For smaller-scale and internal schemas (e.g. internal tool configs) the terseness of e.g. <add key="foo" value="bar"/> definitely wins out over design purity for me. JSON would be equally good for this.
I work on enterprise integration and messaging stuff and I deal with a lot of XML data every day. JSON has its uses (particularly when you control both ends of the serialisation pipeline) but for me XML has a lot of advantages.
For those working in dynamic languages (JavaScript, Python, Perl, Ruby, etc.) JSON maps directly onto built-in language data types and structures making it trivial to work with.
With XML you usually end up needing custom code to convert to and from the parsed XML representation and the language's built-in data types and structures.
This is less of a concern for statically typed languages where you usually have to marshal data to and from your own structs/classes whether it's XML or JSON.
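To make that concrete, a toy Python sketch (field names invented) showing the difference:

    import json
    import xml.etree.ElementTree as ET

    # JSON deserializes straight into built-in types:
    cfg = json.loads('{"name": "web1", "vcpus": 2}')
    assert cfg["vcpus"] + 1 == 3          # values arrive already typed

    # XML gives back an element tree you still have to walk and coerce:
    root = ET.fromstring('<vm><name>web1</name><vcpus>2</vcpus></vm>')
    vcpus = int(root.findtext('vcpus'))   # manual extraction + type conversion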
It's trivial until you run into a date field. I've connected some JSON-based exchanges to each other and just created an XML intermediary format to have a single canonical model, doing the various translations to/from the other systems from there.
Your app still needs to know what is coming in, and convert that to its internal format.
Even if you want to get everyone to agree on XML Schema's datetime format, it's not always sufficient, because sometimes you need the actual time zone (e.g. America/New_York) rather than the UTC offset, especially when dealing with recurring or far-off events.
The biggest problem for me is when people try to squeeze it into something it was never really meant for. I'm thinking of things like Ant files or XAML. In both of those cases they are too verbose and difficult to edit by hand. They are building an object model that would be better expressed as code.
Take all of the top-level elements. Call getElementById on each with the same value. Combine all the answers into an array, eliminating the nulls.
You might expect that array to have length one or zero on all valid documents. You'd be wrong, and dangerously so for some XML schemas. You can use the same ID on every node, and I don't know of a parser that would balk at that. And yet every implementation will return the first node that has that ID, which will then change any time you descend into the DOM.
Why aren't you just using XPath? There are mature and powerful XPath libraries in all major languages.
Of course if you use only one limited tool which was never meant to be the main way of manipulating XML (getElementById), you'll run into problems. It's like never using regexps and complaining that plain strings are a bad data structure.
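For instance, with a bog-standard XPath predicate (toy document, and Python's ElementTree as the stand-in library), duplicate IDs are at least visible instead of silently collapsing to "first match wins":

    import xml.etree.ElementTree as ET

    doc = ET.fromstring(
        '<devices>'
        '  <disk id="a"><target dev="vda"/></disk>'
        '  <disk id="a"><target dev="vdb"/></disk>'
        '</devices>')

    # The predicate returns every matching element:
    for disk in doc.findall('.//disk[@id="a"]'):
        print(disk.find('target').get('dev'))   # vda, then vdb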
That just doesn't make any sense. Regardless of what functions are used "under the hood" (how do you even know that? Did you go through every XPath library? What do you even refer to when I use an XPath library in PHP to open an XML document? There isn't even a web page or "DOM" to speak of; there is no browser, no JavaScript, it's just XML.), the interface gives you the ability to select whatever you want from an almost arbitrarily complex XML document with one-liners of XPath. This is not the same outcome as getElementById acrobatics.
Too bad it was initially envisioned as a text markup language, with tags sparsely strewn around the text, and not as a data representation format.
So, the syntax ended up both overly cumbersome (see closing tags) and festooned with logically unnecessary shorthands like node attributes. Then, the terror of entities.
XSLT is a brilliant language, I'd say the first pure functional language widely used outside academia (in the 2000s), but, being based on XML syntax, it's completely unfit for human consumption.
If only the authors of XML could have shed the shackles of SGML compatibility and gone with a simple, uniform syntax, e.g. s-expressions, we could still be gladly using it. Instead we now reinvent the ecosystem, with JSON (sigh) and YAML.
I see no issue except for the entities, which by and large have died anyway, as they come from DTDs and not from XSDs. I do not see why closing tags are a problem: my editor will insert them, and they make parsing the XML much easier and less error-prone. XSLT is brilliant indeed (and to my eye quite readable, but then again I also like regexps ;-)
You can’t specify that attributes are unique, which is why id is broken. And thus why XML DSIG, which uses id extensively, is exceedingly difficult to harden.
That does not make any sense. There are entire classes of XML parsers that cannot implement getElementById, such as streaming parsers. getElementById is specified as part of the Document Object Model (DOM), not as part of XML. I don't think `id` even has a special meaning in XML.
JSON doesn't even have IDs, so in this particular regard the (similar) tool to what the poster uses doesn't even exist for JSON. So no, it would not be correct to say that in this particular regard JSON is somehow easier to parse.
We built our own automated hosting infrastructure* a few years ago on top of libvirt. Using the libvirt API was a breeze, and libvirt has been rock solid since day one for us; I can only recommend this project.
We're using the libvirt-go binding (Daniel Berrangé and his team are doing an excellent job maintaining it!) and the KVM/QEMU hypervisor. For a small team like ours, it's incredibly valuable to have access to such powerful tools in such an easy way.
Libvirt is really nice. For Linux, I also recommend reading the man page of QEMU directly to learn more about the internals; it helped me understand a lot. Of course you have to take care of networking and disk management on your own, but it can be really simple. https://linux.die.net/man/1/qemu-kvm
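For reference, a bare-bones invocation along these lines (image path and sizes made up) already gives you KVM acceleration, a virtio disk, and user-mode networking:

    qemu-system-x86_64 \
        -enable-kvm \
        -m 2048 \
        -drive file=guest.qcow2,format=qcow2,if=virtio \
        -netdev user,id=net0 \
        -device virtio-net-pci,netdev=net0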
Might be true, but I don't see any open source work from Amazon in the public domain which shows they built their own libraries from scratch to manage Xen and cloud management in the early years, from 2008.
Indeed, it's 2020 and we have yet to see any major open source work from Amazon (which has benefited a lot from open source itself, using Perl, CPAN, C, Java, Linux, etc.). In this respect IBM, Google, Microsoft, Facebook, and Apple are far better. Even Oracle fares better here, due to its acquisition of MySQL and Sun Microsystems.
I believe the major contribution from Amazon might be hiring some open source developers to build proprietary systems. Those developers continue their open source projects in their spare time or on weekends, but I don't have any studies or articles on it.
Based on my information, in 2013 Amazon built their cloud using the Xen hypervisor and related tools and libraries. Libvirt was one of the key libraries providing beautiful abstractions and language bindings to manage Xen on a Linux node at that time.
It would be nice if you could point to code from Amazon for a low-level library like libvirt for cloud computing.
Amazon has been open-sourcing a lot lately: Firecracker (a VMM for microVMs), Bottlerocket (a Linux distribution designed for hosting containers), and they've been distributing OpenJDK builds.
Perhaps not as much as Microsoft (these days), but they are certainly giving back.
For the last month (v6.6.0 and v6.7.0), libvirt has been released with a new signing key, 453B65310595562855471199CA68BE8010084C9C (first seen: 2020-07-20).
It hasn't been signed or verified by any other source in libvirt / Red Hat yet.
https://www.redhat.com/archives/libvirt-announce/2020-July/m...
That is preventing downstream distros, like Arch, from admitting these new releases into their package repos.
I couldn't understand this: "Domain is an instance of an operating system (or subsystem in case of container virtualization like OpenVZ and lxc) running on a virtualized machine provided by the hypervisor"
So the domain itself is a virtual machine? What makes it different from other guest virtual machines?
I've never used libvirt in combination with OpenVZ or LXC, but when I read that, I interpret it as "a domain is a VM or a container", with some kind of VM running on the system to facilitate the containers.
The article is seven years old, so I can imagine that this was how the project once ran containers, but it doesn't anymore. Reading the documentation[1], I don't think the VM is relevant anymore, at least for containers. The docs say libvirt manages LXC containers directly through the kernel API, so there's no VM to speak of there. The OpenVZ docs[2] also mention that the containers run on the host, not in a VM.