Interesting specs for an IoT device. It looks more powerful than most current home PCs or laptops. How would you feel about hosting a small cloud or Kubernetes cluster on your coffee maker?
up to 16 low power x86 cores at up to 2.2GHz
full support for Intel’s VT-d hardware virtualization
up to 64 GB of single-channel DDR4-1866 or DDR3L-1600 ECC memory
PCIe 3.0 x16 controller (with x2, x4 and x8 bifurcation)
16 SATA 3.0 ports
four 10 GbE controllers
four USB 3.0 ports
different TDPs starting at 8.5 W
With 4x10GbE controllers, I almost want some sort of government-mandated firewall to be built in.
> How would you feel about hosting a small cloud or Kubernetes cluster on your coffee maker?
The thing I expect these to lead to is for everyone's router to become an implicit home server, and for that to lead to changes in the way we write "cloud" services.
Right now, a lot of background "checking up on things" for services like group chat apps, happens directly on people's phones, because they're (almost) always on and (almost) always connected to the Internet.
Now imagine that it's guaranteed that everyone in the world has an implicit personal (or family) server their phone can communicate with, and can automatically reach over and launch processes on. All the background logic would move to the server. Phone apps (and a lot of computer apps) would become simple thin-clients that talk to your home server.
It'd be like IRC bouncers, but for everything, and for everyone.
Start a multiplayer game? The lobby server launches automatically on your router.
Install a "free VoIP service" app? You've actually just installed an Asterisk server on your router; the app just configures it and acts as a softphone for it.
Want a blog? The button on WordPress.com doesn't start up a LAMP container in their cloud; it starts a LAMP container on your router.
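To make the WordPress example concrete, here's roughly the shape of what I'm imagining, sketched with the Docker SDK for Python. The router hostname, the exposed (and, for simplicity, unauthenticated) API port, and the image choice are all assumptions for the sketch, not anything these services actually do today.

    # Sketch: a thin client asking a home router/server to start a blog container.
    # Assumes the router exposes the Docker remote API at router.local:2375
    # (an assumption for illustration; a real setup would need auth/TLS).
    import docker

    client = docker.DockerClient(base_url="tcp://router.local:2375")
    blog = client.containers.run(
        "wordpress:latest",            # stock image pulled onto the router
        detach=True,                   # keep running in the background
        ports={"80/tcp": 8080},        # serve the blog on the router's port 8080
        restart_policy={"Name": "always"},
    )
    print("blog running as", blog.short_id)

The phone or desktop app then only needs to know the router's address; everything stateful lives at home.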
The extremely asymmetric nature of cable modem internet connections will make this tough when you're not at home: your house will upload responses to you at a crawl.
That, or the "home server" component will be a separate mode / augmentation for what are still primarily cloud-based services.
I'd much rather locate my personal server (or VM) in a real datacenter and behind a better SLA than my flaky Comcast.
Cable is bad (for logical reasons), but DSL services are alright. My local telco just came out with a symmetric 150 Mbit plan at not-unreasonable prices. I could certainly host a few servers with that.
I think you're still right, though: the larger picture these things will slot into is as subsidiaries for what IBM has termed "cloudlets": automatic device-launched containers, operating on ISP commodity-IaaS clouds (i.e. at the "edge" in the same city the user is in, where all the CDNs and caches and mirrors they access already are.)
Cloud services would probably be written with cloudlet service-worker containers in mind, and then the home-server hardware would just be a latency-optimization, "stealing" workloads from your cloudlet to your LAN when all the traffic is just going to be over your LAN (e.g. for a home streaming service like Plex, or a game server at a LAN party) and then moving them back if you try to speak to the cloudlet from outside your LAN.
Cloudlets are the more important part of that vision, but I could see the home-server part of the design coming first, just because so many companies seem interested in putting devices into home networks that could run them (e.g. the Apple TV and its class of products; the Amazon Echo, Google Home, and other "voice boxes"; most "smart home" device hubs.)
Nah. Home internet connections are still terrible. 2017 now and we still don't have IPv6 on residential connections (I'm on Wave and they said IPv6 deployment is still over a year away).
Home internet connections in the US are terrible. Such a shift doesn't have to start in the US (though American companies would probably be the ones making the products.)
I think IPv6 in the US is regional. I'm in the Atlanta area and I've been on IPv6 for at least two years with Comcast. This is a residential line, not a business line.
This is the result of checking my connection for IPv6 compatibility:
Assuming you weren't being sardonic, any form of "government-mandated firewall" would be worse than useless, and doesn't even really make sense in this context.
These are fairly anemic processors, so I suspect you would be disappointed trying to run a desktop OS on one, let alone going head to head with a modern PC.
I'm posting from an older Thinkpad with a Core 2 Duo. Comparing this to a C2750, they appear comparable[0]. This system runs Windows 10 fine, and so I don't see why the C2750 (and C3000) would not as well. Granted, an i5 outperforms them both.
In single-thread performance the ancient Core 2 is going to destroy the Atom, even at the same clock rate. The only good thing about the Atom is the large core count [1], which makes up for the anemic single-core performance; that's nice for small servers but pointless for desktops.
[1] From a performance point of view, of course. The Atom probably consumes significantly less power.
"older" doesn't even begin to describe it, mind you. These are ten years old. A six year old Sandy Bridge (T420, X220 that series of ThinkPads) is going to wipe the floor with these Atom CPUs.
Do you have any tips on how? At work I have a brand new Dell Precision 5510 with Win10 that I had to upgrade to 32 GB RAM when using two full-HD external monitors (internal 4K display off) running Firefox and Autodesk Inventor, because on 16GB it would randomly crap out and bluescreen (with the new useless "sad smiley" type BSOD). Mind you, FF and Inventor were only using ~6GB RAM total.
Er... I can't help you for I have never used a computer with more than 4 GB. And I was running dual monitors in the days when 256MB was a very decent setup. You had to upgrade to 32 GB RAM?
I guess I should distinguish between when I had bluescreens, and when Win10 crashed the display manager saying "your system is low on memory". And only when running on dual monitors, never on the internal 4K display (which oddly enough has more total pixels). The "out of memory" errors led me to upgrade to 32 GB.
Bluescreen == unrecoverable kernel corruption == either bad drivers or failing hardware. If replacing the RAM fixed it, your old RAM was dead. Nothing to do with the capacity.
Do you have the Samsung NVMe driver installed? I had a similar issue on a fresh Windows 10 install that was solved by installing the driver directly from Samsung (the one for the 950 Pro works if I remember correctly).
Make sure your drivers (and BIOS) are completely updated... when I went to a GTX 1080 for a 4K display, I had a lot of weird issues... they all went away with a BIOS update of all things. Have to say, being able to trigger a BIOS update from Windows software was a weird experience... more scary when you come from having to create a DOS boot disk with everything needed to update something.
With a SSD, I would think they would be acceptable for light tasks.
I've got a 4-core now running ESX and it works fine for the most part; stepping up to a 16-core to have more things running would be interesting.
They aren't that high... there are tooling costs (which are pretty fixed) regardless of whether you sell 100 or 100,000 devices. You also have some pretty high software costs to ensure everything works together well. There are far fewer NAS boxes sold than laptops or desktops, so they need higher margins to cover the other fixed costs.
My last round of system upgrades (htpc and desktop) two years ago were driven by lowering power use... I retired a homebrew freenas box using an FX-8350, and was surprised at the change in the power bill... my htpc at the time was a similar cpu, and my desktop, I don't recall what it had at that point. In the end went with a couple i3-5010u's for htpc and spare, and an i7-4790 for my desktop (usually idle)... man did that make a huge dent in my electric bill.
Planning on a new NAS sometime this year, so hopefully will see something with enough oomph to do on the fly transcoding from h.265 to whatever my devices need. I've been recoding most of my media to h.265 for space savings, but wouldn't mind being able to direct-stream it to more things. Still think it would be cool to do a media server that targets chromecast devices.
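Just to sketch the on-the-fly part, something like this is what I have in mind, driving ffmpeg from Python. The filenames, the codec choices, and the assumption that the target device wants H.264 in MPEG-TS are mine, for illustration only.

    # Sketch: re-encode an H.265 source to H.264 on the fly and stream MPEG-TS
    # out of stdout, the kind of thing a Chromecast-targeting server would do.
    import subprocess

    cmd = [
        "ffmpeg",
        "-i", "movie.x265.mkv",   # H.265 source file (placeholder name)
        "-c:v", "libx264",        # re-encode video to H.264
        "-preset", "veryfast",    # trade quality for CPU; matters on an Atom
        "-c:a", "aac",            # re-encode audio
        "-f", "mpegts",           # stream-friendly container
        "pipe:1",                 # write to stdout for an HTTP layer to serve
    ]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    chunk = proc.stdout.read(188 * 1024)  # read TS packets as they're produced

Whether 16 Atom cores can keep up with a realtime libx264 encode is exactly the "enough oomph" question.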
I bet the Intel chip will appear to beat competitors in benchmarks until the inevitable discovery that Intel gimped the compiler/benchmark or some similar tomfoolery.
I'm highly skeptical that these could end up in autonomous cars for anything other than head-unit duty, and you don't need anything as powerful as this for that task.
They'd have to have built the thing to a standard strict enough to be a component in an ASIL-D rated system, which there's no evidence has actually happened.
Unless they just mean for autonomous test mules and/or captive fleets.
For some reason I thought they were abandoning the 3000 line. The specs certainly make for a great NAS device, 16 SATA ports is a healthy amount of storage behind dual parity RAID. 64GB of ECC RAM that works, and presumably you can peel off a PCIe x4 lane for boot media in an M.2 slot. I could see replacing my current FreeNAS box with that if it continued to be as quiet.
Online.net currently uses the 8-core Intel C2750 Atoms for some of their dirt cheap dedicated servers. I hope to see them offer some affordable 16-core Atom servers soon.
Minecraft makes use of many cores and my kids are complaining about server lag :-)
Maybe something changed very recently, but as far as I know Minecraft really doesn't make efficient use of multiple cores. Almost all game logic happens on a single thread, so getting fewer, more powerful cores would be a much better way to reduce lag.
I think perhaps the "Scaleway" brand name is less about what their users can do with their products, and more about what their business model is.
As for their hardware, I believe they have designed all of the physical servers themselves, so they'll contain nothing they don't need. https://www.scaleway.com/features/hardware/
Asking people with a hardware background - how hard would it be to build a router with NAS, an API to talk to Nest etc., a smoke alarm, and a security cam? And what would be the cost of such a device?
It's probably going to cost you more time than hardware, I'd honestly suspect. Any simple Atom-based NAS should really have enough "oomph" for, like, a REST HTTP API that you can talk with. I have a 4-core C2000 about a foot from me that would do great. You can piece one together for a cheap few hundred dollars (like $300 can probably get you something decent). But you're going to have to program all the shit to work together, and that's what's time-consuming.
I've been looking at this for my home. Realistically I've just been spending a lot of time so far just modeling my database schemas so I can eventually put APIs on top of it all to talk with.
Mostly just make sure you pick devices you think you can interface with, or that have already been reverse engineered, or get ready to reverse them yourself I suppose (which might be easier than you think, considering how bad embedded security is)...
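For the "REST HTTP API" part, I'm picturing something as small as this to start with; Flask, the endpoint names, and the in-memory "database" are placeholder assumptions, not a real design.

    # Sketch: a tiny home-hub REST API running on the router/NAS box.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    devices = {"thermostat": {"target_c": 20}, "smoke_alarm": {"ok": True}}

    @app.get("/devices")
    def list_devices():
        return jsonify(devices)        # read everything the hub knows about

    @app.put("/devices/<name>")
    def update_device(name):
        devices.setdefault(name, {}).update(request.get_json())
        return jsonify(devices[name])  # e.g. PUT {"target_c": 22} to /devices/thermostat

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

The hard part is everything behind those endpoints: actually talking to the Nest, the cams, and the alarm.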
I did something similar (not a router, but we considered it), and it's not that difficult on the hardware side. Our home automation hub was just a Raspberry Pi 3, and it had more than enough power.
If I were doing this just for my own home I'd probably get a half decent NAS capable of running Linux and then run home-assistant.io on it rather than building everything from scratch. Home Assistant gives you a decent UI, integration with almost any hardware you can think of, programmable automation, and an API to build against if that isn't enough.
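And the Home Assistant API really is just REST, so building against it isn't much work. A rough sketch of reading states and calling a service; the host and the long-lived access token are placeholders:

    # Sketch: talking to a Home Assistant instance's REST API from a script.
    import requests

    BASE = "http://nas.local:8123/api"                           # assumed HA host
    HEADERS = {"Authorization": "Bearer YOUR_LONG_LIVED_TOKEN"}  # placeholder token

    # read the current state of every entity Home Assistant knows about
    states = requests.get(f"{BASE}/states", headers=HEADERS).json()

    # turn a light on via the service API (entity_id is made up)
    requests.post(
        f"{BASE}/services/light/turn_on",
        headers=HEADERS,
        json={"entity_id": "light.living_room"},
    )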
For the router you could look into PFSense. I absolutely love it. Use it both professionally (as a firewall/VPN endpoint/access filter/upstream load balancer) and for home setups.
On the NAS side there is FreeNAS, however I don't know how the two would interact or if you could directly install both packages onto one server (I don't see why not, though, even if PFSense probably wouldn't recommend it).
If you're into tiny PCs, COMs, SBCs, etc., I've added http://linuxgizmos.com to my Feedly list and find that it's a decent way to stay current with that market.
I also like http://www.cnx-software.com/ for this, but I don't think there are any ARM boards that approach the power of these (or the previous C27xx) chips.
The $27 price is for a dual core chip, not a 16 core one. If Avoton prices are anything to go by then expect higher core count chips to be considerably more expensive. Intel is always careful to make sure Atom products don't cannibalize their higher margin Core CPUs so don't expect these to ever be a win from a CPU perf-per-dollar standpoint.
Intel Atoms are utterly useless for that. I'd wager they are useless for most anything that isn't memcpy-bound, so I wonder what makes them put 16 of them into one package.
For memory-bound apps are they sort of useful? For instance, A packet forwarding system that reads a packet, does 3 main-memory lookups, then forwards or drops?
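Roughly this shape of per-packet work, sketched in Python just to show what I mean (the tables and packet fields are made up; a real forwarder would be C/DPDK):

    # Sketch: per-packet processing that is basically three main-memory lookups.
    routes = {"10.0.0.0/24": "eth1"}         # lookup 1: route -> output NIC
    acl    = {("10.0.0.5", 22): "drop"}      # lookup 2: filter rule
    arp    = {"eth1": "52:54:00:aa:bb:cc"}   # lookup 3: next-hop MAC

    def handle(pkt):
        nic = routes.get(pkt["dst_net"])                                  # lookup 1
        if nic is None or acl.get((pkt["dst"], pkt["dport"])) == "drop":  # lookup 2
            return None                                                   # drop
        return {"out": nic, "dmac": arp[nic], **pkt}                      # lookup 3, forward

    print(handle({"dst_net": "10.0.0.0/24", "dst": "10.0.0.9", "dport": 80}))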
Hard to compare against the Cavium Octeon. x86 general purpose vs MIPS64 with dedicated offload hardware...really depends on the use-case. I have an ERLite and AFAICT it competes with stuff way above it price-wise, e.g. https://store.netgate.com/ADI/RCC-VE-2440-board.aspx
I'd rather run PFSense, but it's not worth an extra $200+ for just my home use.