I'm also looking for a decent (ideally wireless) headset that works well on Linux, with as good a microphone as possible.
Currently I have:
- wired Sennheiser PC 350 SE - mic quality is good
- Sony WH-1000XM3 - mic quality is mediocre
I'm quite happy with my Sennheiser, but more and more often I have a noisy environment around me, so I would prefer to have noise canceling + be able to walk around the flat when I'm on conf calls.
My understanding is that right now, even with BT 5.0, I shouldn't expect a high-quality codec, as all duplex profiles (HFP/HSP) use low-quality codecs (ref: https://habr.com/en/post/456182), but I would be OK with something that uses a dongle too, as long as it works OK on Linux. I was contemplating the EPOS Adapt 560, but after watching many tests by CallOne (https://www.youtube.com/c/CallOneInc/videos) I'm just not too impressed. And also, I have no idea how those will work on Linux... Any recommendations?
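(To illustrate the codec issue: with PulseAudio/PipeWire the card has to drop from its A2DP profile to HFP/HSP as soon as the mic is opened. A rough sketch of toggling it by hand via pactl follows; the card and profile names are placeholders, so check `pactl list cards` on your own system, as PulseAudio and PipeWire spell them differently.)

```python
#!/usr/bin/env python3
"""Rough sketch: flip a Bluetooth headset between A2DP (good playback, no mic)
and HFP/HSP (mic available, low-quality duplex codec) via pactl.
The card and profile names below are placeholders, not taken from any real setup."""
import subprocess

CARD = "bluez_card.XX_XX_XX_XX_XX_XX"   # hypothetical card name; see `pactl list cards short`
PROFILES = {
    "music": "a2dp_sink",               # PipeWire often spells this "a2dp-sink"
    "call": "headset_head_unit",        # PipeWire often spells this "headset-head-unit"
}

def set_profile(mode: str) -> None:
    # Ask the sound server to switch the card's active profile.
    subprocess.run(
        ["pactl", "set-card-profile", CARD, PROFILES[mode]],
        check=True,
    )

if __name__ == "__main__":
    set_profile("call")   # switch before a conf call, back to "music" afterwards
```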
I also have the Sony and have noticed the mic issues. They seem related to Linux, though, as the quality seems OK when using my iPhone. I've never tried it on Windows.
If you don't mind having different gear for conferencing, I use a Jabra Engage 75 [0] which works great on Linux. Lag is unnoticeable to me. Audio quality is terrible for music, though, especially compared to something like the Sony.
Maybe a closed-ear set that isolates better? Or, my personal favorite, in-ear for sound isolation without needing active noise cancellation. I'm not sure what the wireless world for those is like, outside of adapters. I do have a Jabra 75t pair that I use with mobile devices, which does some sound isolation and recently added noise cancellation (haven't tried it). Their newer ones have better support for noise canceling (they must include a chip for the processing).
But surely those are super-expensive, and I doubt they're all that effective when it comes to performance per watt. I'm actually a bit more excited about POWER8 servers, but are there any non-IBM ones available yet?
I can fit 4 C7000 chassis in a single rack, which top out at an aggregate 32 TB of memory and over 2000 cores. 5 GHz cores running s390x microcode are not, FYI, twice as fast as Xeons.
It would be great if people who weren't familiar with the state of the art in x86 kit didn't blindly assume IBM et al's advertising was reality.
I'm no expert, but my understanding is that the IBM "z" platform has a number of interesting features perhaps not found in more typical server platforms/configurations, beyond just scaling out to loads of cores and RAM.
As with all IBM mainframes going right back to the 60s, the "z" systems are designed for continuous uptime, on the order of decades, and this is evident in many of the design decisions. For example, various subsystems and components have hot-spares available, so that even outright component failures will not cause downtime. Many components are then hot-swappable, even those that might not ordinarily be in other architectures, such as processors and main memory. No interruption to OS or application-level services is expected by hot-swapping such hardware. Across the useful life of the mainframe, most repairs and maintenance would be carried out without ever shutting it down.
I understand the platform also has extensive internal integrity checking built in, a potentially important factor for various types of jobs. Its auditing service is capable of detecting unusual conditions in various subsystems or jobs ("I've just picked up a fault in the AE-35 unit"), automatically retrying instructions on the processor if they executed anomalously. If the fault continues, the suspect processor is routed-around with no interruption to OS or applications, the job is resumed from last checkpoint on another processor, and the system phones home to IBM to log a service call. This monitoring is not being performed by processes running in userland or by the kernel, but is in fact baked into the hardware/firmware platform.
Furthermore, the systems can be configured with a variety of specialty offload processors or subsystems for tasks like encryption, key management, compression, and even logging -- which again might not be so commonly found on-board of some commodity servers.
(And, of course, even if you could put together an analogous solution with commodity kit, it's IBM! For the sorts of companies looking at a mainframe in 2015, having the IBM name on the SLA has got to be a pretty big part of the equation, right?)
Moreover, if you proposed to build and manage these sorts of capabilities from commodity x86 kit, I imagine IBM would claim that they'd have the lower TCO.
Whereas I have hands-on experience with this stuff. So I guess I should thank you for giving me first-hand experience of being on the receiving end of that mansplaining thing people complain about.
> I imagine IBM would claim that they'd have the lower TCO.
Yes, they will. The funny thing is, the man from HP was in just last month explaining to me that Xeon Superdomes have a lower TCO, too, and last year the lady from Oracle was telling me how I shouldn't balk at the headline cost of ExaData and ExaLogic because, from a TCO perspective, they'd save me money.
People fall into the mistake of just comparing cores and memory specs between x86_64 and s390x. The hardware redundancy benefits are pretty huge. You really need a full proof-of-concept to get any picture of how your workload might run on System z.
NB re your AE-35 comment: despite loud protestations, there's a very good case that 2001 was written and directed as a sharp critique of IBM, including IBM logos in numerous places.
No. Which workloads do you have that need more than 40 cores per image, are happy with a maximum of around a hundred, and won't run on clusters? (I hope it's not one that's going to be crippled by zVM's slow scan of large memory areas suspending guest execution.)
> I/O offloaded
If you've actually worked with zLinux (I have) you'll know there's little effective offload, and that zVM overhead increases as the virtual IO ramps up.
> and five 9's
Is zVM offering sysplex? No. So you're relying on your single LPAR to be five nines? Never upgrading zVM? Never updating PR/SM?
Is the datacentre five 9s? The power? The network? Really?
> most rack cannot fit 10 TB of RAM and 140x5GHz cores
Any rack can fit 10TB of RAM and the equivalent of 140x5GHz POWER cores.
I can easily get ~1TB RAM and 64 2GHz+ x86 cores in 1U. A standard rack is at least 42U. In terms of density it's nothing special.
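Doing the arithmetic on those numbers: 42 x ~1 TB is roughly 42 TB of RAM and 42 x 64 is ~2,700 cores per rack, well beyond the 10 TB / 140-core figure quoted above.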
What makes these different is that you get it in a box that someone else will be maintaining, and where all the dirty work of designing and building a high availability redundant system has been done for you. Most people never build anything remotely as redundant as these things tend to be.
I'm guessing part of the appeal will also be super-wide bandwidth between the cores. Interconnect throughput and/or shared memory could make this significantly more interesting than a modular rack-based system for some workloads.
> I'm guessing part of the appeal will also be super-wide bandwidth between the cores.
If it's anything like previous generations, that will be InfiniBand between books. The benefits on z-class systems tend to be very big per-core and per-socket caches and, as the person you're replying to said, a lot of internal redundancy and automated failover (multiple backplanes with failover, spare memory and cores with failover, and so on and so forth).
The high-end POWER 7/8 hardware has incredible single core performance, beating the pants off Xeon. It uses huge amounts of power to do that, so it's not appropriate for all roles. Low end POWER is pretty niche. Freescale uses the architecture for telecoms applications.
In general Linux upstream these days "just works" on ppc64 & ppc64le. There's RHEL for POWER already, and IBM have loaned hardware to the CentOS project so we'll get CentOS on POWER pretty soon.
The licensing of (Open-)POWER is more open than x86 (but not as open as stuff like RISC-V), and there are several second sources for chips, in situations where that matters.
You get a pretty decent multiple of performance on POWER vs. Intel. Probably something like 4-7x for common use cases.
Generally speaking, you buy the hardware because your workload needs the single core performance, or you're arbitraging vendor licensing costs for software.
Also, IBM's bread and butter is "peaky" financial services and gov't business, so they have business models that make it work from a $ point of view. You can buy a box with 100 cores, pay for 20, and lease 30 more for a few days to meet your peak demands for tax/Christmas/billing season.
I've made a habit of downloading videos manually just to speed them up. When watching a video for its information, I can usually listen to it comfortably at around 1.7-1.8x, depending on the speaker. It takes a little getting used to though, I'll say that.
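If you'd rather bake the speed-up into the downloaded file instead of relying on the player, here's a rough sketch using ffmpeg's setpts/atempo filters (the filenames and the 1.75x factor are placeholders; most players can also just change speed at playback time):

```python
#!/usr/bin/env python3
"""Rough sketch: write a sped-up copy of a downloaded video with ffmpeg.
atempo changes audio tempo without shifting pitch; setpts speeds up the
video track to match. Filenames and the speed factor are placeholders."""
import subprocess

SPEED = 1.75  # atempo accepts factors between 0.5 and 2.0 per filter instance

def speed_up(src: str, dst: str, speed: float = SPEED) -> None:
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-filter_complex",
            f"[0:v]setpts=PTS/{speed}[v];[0:a]atempo={speed}[a]",
            "-map", "[v]", "-map", "[a]",
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    speed_up("lecture.mp4", "lecture_1.75x.mp4")
```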
Memoryviews are the step-child of NumPy arrays. The buffer protocol was the real intent, and the memoryview was the forgotten "example" until a few brave and noble Python devs rescued it from obscurity. Memoryviews are not that useful to someone who will always have NumPy installed.
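For anyone who hasn't played with them, a minimal sketch of what memoryviews give you without NumPy (zero-copy views over anything exposing the buffer protocol) and how they interoperate with NumPy when you do have it:

```python
import numpy as np  # only needed for the last few lines

# memoryview gives zero-copy views over any object exposing the buffer
# protocol (bytes, bytearray, array.array, mmap, NumPy arrays, ...).
buf = bytearray(b"hello world!")
view = memoryview(buf)

head = view[:5]          # a slice is another view, no copy made
head[:] = b"HELLO"       # writes through to the underlying bytearray
print(buf)               # bytearray(b'HELLO world!')

# Reinterpret the same memory as 16-bit unsigned ints, still zero-copy.
as_u16 = view.cast("H")
print(as_u16.nbytes, len(as_u16))   # 12 bytes, 6 elements

# NumPy speaks the same protocol, so conversions in both directions
# avoid copying the data.
arr = np.frombuffer(view, dtype=np.uint8)   # wraps the bytearray's memory
back = memoryview(arr)                       # view over the ndarray
print(arr[:5])
```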