swansonc's comments

I looked at the project, and the editor doesn't look too bad, but why, oh why, yet another markdown-ish format? Did you REALLY need to do that? There are already multiple markdown flavors, and, if you want something a bit more 'bookish', there's a nice ecosystem around asciidoc(-tor). Did you REALLY need to introduce another markdown?


Hi Jeff - if there were a single solution that fit all requirements, then the choice would be obvious :) However, experience has shown that there are differing sets of requirements. Some folks may need the flexibility (say, disjunct fabrics) that encap provides, while others need the scale (say, 1000s or 10,000s of servers) or simplicity (those sort of go hand-in-hand) that a non-encap data-path provides.

The big question a system architect needs to ask when designing a system at scale is not "should I use this technique?" but "do I NEED to use this technique?" We can always add more complexity and technology/layers than we need because we "may need it" in the future, and we almost always end up with a Jenga tower when we are done.

So, when laying out your infrastructure, be sure to know what your actual requirements are, and don't add a lot of extraneous capabilities that you have to maintain and troubleshoot later.


I want to make one edit: I should have said "need the flexibility" instead of "may need the flexibility". Both the scale and the flexibility can be hard requirements. I don't want folks to think that I am saying that scale trumps flexibility/disjunct fabrics. They are equal, if that is the environment you operate in. Again, full disclosure: I'm with the Project Calico team.


It depends on how you use DPDK. If you use it from the container directly to the NIC, you certainly do lose all of the kernel capabilities. However, we believe (but have not tested) that you can use a DPDK virtual interface in the container/VM (memnic or virtio) that connects to the DPDK driver in the kernel, so the path from the container/VM is zero-copy. The kernel then does its processing, and then another DPDK path could (potentially) be used to zero-copy the traffic to the NIC (really uncertain about that last stage). Basically, you are just using DPDK to save on the copy cost.

This is all academic until tested, btw. As yet, we (on Calico) haven't had anyone stand up and say that they need more performance than the native data-path we use today can deliver.
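
To make the "save on the copy cost" point concrete, here is a minimal sketch of a plain DPDK poll-mode receive loop. This is not the Calico data path and not the container-side virtio/memnic plumbing described above; the port number, pool size, and ring sizes are illustrative assumptions. The point it shows is that packets land in rte_mbuf buffers in shared hugepage memory and are inspected in place, so there is no per-packet copy into a socket buffer.

    /* Hedged sketch: a bare DPDK poll-mode RX loop on an assumed port 0.
       Frames are read in place from rte_mbuf buffers (no per-packet copy),
       which is where DPDK's copy savings come from. Sizes are illustrative. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RING_SIZE  1024
    #define NUM_MBUFS  8191
    #define BURST_SIZE 32

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return EXIT_FAILURE;

        /* One pool of packet buffers in hugepage memory, shared with the NIC. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
            NUM_MBUFS, 250, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (pool == NULL)
            return EXIT_FAILURE;

        uint16_t port = 0;                  /* assumed: first DPDK-bound port */
        struct rte_eth_conf port_conf = {0};
        if (rte_eth_dev_configure(port, 1, 1, &port_conf) < 0)
            return EXIT_FAILURE;
        if (rte_eth_rx_queue_setup(port, 0, RING_SIZE,
                rte_eth_dev_socket_id(port), NULL, pool) < 0)
            return EXIT_FAILURE;
        if (rte_eth_tx_queue_setup(port, 0, RING_SIZE,
                rte_eth_dev_socket_id(port), NULL) < 0)
            return EXIT_FAILURE;
        if (rte_eth_dev_start(port) < 0)
            return EXIT_FAILURE;

        struct rte_mbuf *bufs[BURST_SIZE];
        for (;;) {
            /* Poll for a burst of packets; each mbuf points at the original
               buffer, so the frame is examined in place, not copied. */
            uint16_t nb = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
            for (uint16_t i = 0; i < nb; i++) {
                /* ... look at rte_pktmbuf_mtod(bufs[i], void *) here ... */
                rte_pktmbuf_free(bufs[i]);
            }
        }
        return 0;
    }

In the container-to-kernel arrangement described above, the same in-place buffer handoff would happen over a vhost/virtio ring instead of directly against the NIC, but the copy-avoidance idea is the same.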


Now that sounds interesting. I'd love to read about that if you ever do move the idea from paper to production :)


We'll let you know.


Thanks for the IPv6 love on Project Calico, Justin. Have you been testing Calico's v6? If so, we'd love to talk to you (disclosure: I'm on the Project Calico team).


Not yet, planning to give it a go soon...


Drop me a line when you do. Containers or OpenStack? cdl (at) projectcalico

