Once you're past the fundamentals, if you find yourself interested in high-performance networking, I recommend looking into userspace networking and NIC device drivers. The Intel 82599ES has a freely available (and readable!) data sheet, DPDK has a great book, fd.io has published absolutely insane benchmarks, and ixy [1] has a wonderful paper and repo. It's a great way to go beyond the basics of networking and CPU performance. It's even more approachable today with XDP – you don't need to write device-specific code.
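To make the XDP point concrete, here's roughly what a minimal XDP program looks like (just a sketch; the file name, the eth0 interface, and the build/attach commands are illustrative). The same BPF object attaches to any driver with XDP support, which is what makes it device-agnostic:

    /* minimal_xdp.c -- illustrative file name.
       Build:  clang -O2 -g -target bpf -c minimal_xdp.c -o minimal_xdp.o
       Attach: ip link set dev eth0 xdp obj minimal_xdp.o sec xdp  (eth0 is a placeholder) */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_pass_all(struct xdp_md *ctx)
    {
        /* The raw packet lives in [data, data_end); we don't inspect it here.
           Returning XDP_DROP instead would discard every packet in the driver,
           before the kernel networking stack ever sees it. */
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        (void)data;
        (void)data_end;
        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";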
Unfortunately I don’t have any recommendations to give, my experience starts and stops at application development. Though I would love to spend some time in a datacenter!
Potentially have a look at InfiniBand and Clos/fat-tree networks?
My more generic recommendation would be to explore Semantic Scholar for impactful/cited papers, look for some meta-analyses, and just dig through multiple layers of references till you hit the fundamentals (typically things published in the '80s for a lot of CS topics).
Do you have any career advice for someone deeply interested in breaking into high-performance GPU programming? I find resources like these, and projects like OpenAI's Triton compiler or MIMD-on-GPU, so incredibly interesting.
But I have no idea who employs those skills! Beyond scientific HPC groups or ML research teams anyway - I doubt they’d accept someone without a PhD.
My current game plan is to get through “Professional CUDA C Programming” and various computer architecture textbooks, and to see if that's enough.
Given that CUDA's main focus has been C++ since CUDA 3.0 (ignoring the other PTX source languages for now), I'm not sure that 2014 book is the right approach to learning CUDA.
Can you elaborate a bit on how C++ affects the programming model? Isn't CUDA just a variant of C? I presume the goal is not to run standard C++? Also, as I understand it, PTX is an IR, so I'm not sure why it's being compared with C/C++.
Not at all, unless we are talking about CUDA prior to version 3.0.
CUDA is a polyglot programming model for NVidia GPUs, with first-party support for C, C++, Fortran, and anything else that can target PTX bytecode.
PTX allows many other languages with suitable toolchains to also target CUDA in some form, with .NET, Java, Haskell, Julia, and Python all having some kind of NVidia-sponsored implementation.
While originally CUDA had its own hardware memory model, NVidia decided to make it follow C++11 memory semantics and went through a decade of hardware redesign to make it possible.
- CppCon 2017: Olivier Giroux, "Designing (New) C++ Hardware"
This is why OpenCL kind of lost the race: it focused too much on its C dialect and only went polyglot when it was too late for the research community to care.
I disagree; “req/res body cookies that’s it” is not the underlying system. It's an abstraction over the TCP/IP and HTTP stack.
What happens when you need to disable Nagle's algorithm, avoid copies of the request body, use WebSockets, gRPC, etc.? You'd have to pray that the framework gives you an escape hatch.
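For a concrete example of the kind of escape hatch I mean: at the socket level, "disable Nagle" is a single setsockopt call, but you can only make it if the framework hands you the underlying file descriptor (or exposes an equivalent option). A rough sketch in plain C, assuming you can get at the fd (the helper name here is made up):

    #include <netinet/in.h>   /* IPPROTO_TCP */
    #include <netinet/tcp.h>  /* TCP_NODELAY */
    #include <sys/socket.h>   /* setsockopt  */
    #include <stdio.h>

    /* Disable Nagle's algorithm on a connected TCP socket so small writes
       are sent immediately instead of being coalesced. Returns 0 on success. */
    static int disable_nagle(int fd)
    {
        int one = 1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) != 0) {
            perror("setsockopt(TCP_NODELAY)");
            return -1;
        }
        return 0;
    }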
I might be doing it wrong, but I just approach those as new pipes to be put onto the system? They're, after all, basically different ways of transmitting data from point A to point B.
Why you would choose a framework that didn't provide an escape hatch for your use case is the real question, though.
It's at a very early stage, so I expect I'll finish v1.0 in about half a year. I have to work on my MSc project as well. I'll post it on LinkedIn, so you can follow me there. https://www.linkedin.com/in/petr-kube%C5%A1-a9a48212a/
That does not do what you think it does: that’s the button to disconnect from currently connected devices, not to turn off radios. Similarly, the airplane mode button above it will maintain Bluetooth.
Turning Airplane Mode on and then tapping the Bluetooth icon will fully disable Bluetooth (gray background, not white). You can also just disable it in Settings. Apple’s argument, as I understand it, is that Bluetooth and WiFi don’t actually consume much battery, but non-technical users would habitually disable them to save battery and then complain that Location Services didn’t work well. Hiding the setting but keeping an essentially useless toggle prevented those non-technical users from making their device function worse while still letting them feel like they were saving battery life.
Yeah, but they forget about users who simply want to disable yet another tracking vector on their device.
To add insult to injury, when you automate WiFi and Bluetooth to turn off when you leave your home, you have to manually confirm the action via a notification every time.
I created a Shortcut for that: essentially a button on my home screen I can press that actually disables Bluetooth and WiFi. And when my phone is connected to WiFi, mobile data gets disabled.
It's not perfect, but at least I don't have to open the settings app each time.
They sell an API product, and documentation is an absolutely core, user-facing part of that product. It doesn't make sense not to own something that differentiates your product's value proposition.
You have no choice but to spend all 24 hours of your day. You can't just skip over 7–9 AM and then have two hours in your back pocket for when you're short on time.
So no, you never just have time you're not using anyway. You're always using all of your time; what you spend it on is a matter of priorities.
However, if you would otherwise spend 19–21 watching a TV series or twiddling your thumbs, you can choose to use that time for something else.
Having your caching set up incorrectly is verrryyy easy to do. There are lots of things you can miss, and you don't realise it until a whole lot of traffic hits you.
[1] https://github.com/emmericp/ixy