> But there is a lot of work for the compiler here, wow. Knowing the maximum number of registers that is needed for any function call made within a function? Ouch.
That shouldn't be too difficult. The compiler is already type-checking the parameters of every call within the function. Remembering the highest count won't take much more work, and it's capped at 8 anyway.
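For a sense of how little extra work that is, here's a minimal sketch. It assumes a toy IR where every call site already carries its argument count (the struct and field names are made up for illustration, not from any real compiler):

```c
#include <stddef.h>

/* Hypothetical IR node for a call site; a real compiler already has
 * something like this around while type-checking the parameters. */
struct call_site {
    size_t arg_count;   /* number of arguments at this call */
};

/* While walking a function body, remember the largest number of
 * argument registers any callee will need, capped at 8 to match the
 * register-based convention discussed above. */
size_t max_outgoing_args(const struct call_site *calls, size_t n)
{
    size_t max = 0;
    for (size_t i = 0; i < n; i++) {
        size_t c = calls[i].arg_count;
        if (c > 8)
            c = 8;          /* anything beyond 8 spills to the stack */
        if (c > max)
            max = c;
    }
    return max;
}
```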
> Support for multiple return values is cool though. That'd be incredibly nice.
Agreed. So many processors seem designed just to run C. Then when something extra like multiple returns appears, it goes unused.
> That shouldn't be too difficult. The compiler is already type-checking the parameters of every call within the function. Remembering the highest count won't take much more work, and it's capped at 8 anyway.
Many of the accounts that I've read indicated the largest issue with Itanium was building effective compilers.
I recall learning that the important part of the optimization phase for Itanium was based on runtime performance analysis, not just static optimizations.
That's true, but figuring out the maximum number of arguments passed to another function is trivial; I would think it isn't harder than determining how much space one needs for local variables in an Algol-like language.
That's definitely true, but counting function parameters was only a small part of the difficulty. Weren't most of the problems related to optimization being harder and less effective than predicted?
That's what I get for citing Wikipedia. But the idea was also brought up by actual southern politicians, particularly in the discussions at the end of the Mexican War.
A detector is typically a series of concentric cylinders, with the beam pipe, where the collisions occur, running through the center. The inner layers are tracking chambers, which detect the paths of charged particles. This is what produces all the curved lines radiating from the center.
The outer layers are calorimeters, which catch particles and measure their kinetic energy. As you correctly assumed, these produce the bar plots. Often there will be one layer of calorimeters for photons and electrons, and a second for hadrons (protons, mesons, etc.)
Very helpful, thank you. Do you know if they have the ability to plot the various events of the collision by time, such that the bar plots could be shown appearing one after another as collision events are recorded?
I'm not familiar with the nitty-gritty of the LHC experiments, but in general the various detector subsystems have different response times and rate capabilities. There are certain types of detectors with explicit time granularity (e.g. time of flight) but for most detectors there would be no time structure within a single collision record ("event").
But you could work backwards to make an animation for each event. From a scientific perspective it's not that interesting though.
Signals are too brittle and complex. That's why you have to read three whole manpages to figure out what happens if a process gets the same signal twice in rapid succession.
They are also un-Unixlike: they are used to communicate three or four different kinds of information, and they do most of them badly.
Exactly. The signal mechanism makes sense for notifying a thread of synchronous errors arising from its own execution, like SIGSEGV, SIGILL, or SIGFPE.
Most of the rest of the traditional UNIX signals are events that should be communicated asynchronously via file descriptors that a process can poll at its leisure, which would be more UNIX-y.
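Linux already offers that pollable-descriptor approach with signalfd(2). A minimal sketch (Linux-specific, error handling mostly trimmed) of receiving signals through a file descriptor that can sit in an ordinary poll loop:

```c
#include <sys/signalfd.h>
#include <signal.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGTERM);
    sigaddset(&mask, SIGINT);

    /* Block normal asynchronous delivery so these signals are only
     * reported through the descriptor. */
    sigprocmask(SIG_BLOCK, &mask, NULL);

    int fd = signalfd(-1, &mask, 0);
    if (fd < 0)
        return 1;

    /* The descriptor can go into the same poll()/select() loop as
     * sockets and pipes; here we just read a single event. */
    struct signalfd_siginfo si;
    if (read(fd, &si, sizeof si) == sizeof si)
        printf("got signal %u from pid %u\n", si.ssi_signo, si.ssi_pid);

    close(fd);
    return 0;
}
```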
Systemd is why kdbus is even being talked about (personally, I think it's a crazy idea, but then I don't use dbus even on my desktop), and the two projects share a lot of developers.
There's a lot of good advice already, but here's one tip:
Learn how to read documentation. Consult man pages and official documentation before resorting to random people on websites. This is a skill that requires practice, because a lot of the material is mediocre. Some writers give overviews, some give examples, some list every feature. You may be more comfortable with one kind, but learn how to digest each one and extract the knowledge you need.
The world probably needs less content creation. We're long past the point where there are too many useful books to read in one lifetime. Even in narrow fields, we're producing comedy television, historical fiction novels, or cat pictures faster than one person could consume them all. You could spend a lifetime just researching what's worth consuming.
With this kind of overproduction, it's no surprise creators can't make money.
It's funny how this was noticed by King Solomon, far before the age of print. "Of making many books there is no end, and much study wearies the body." (Ecclesiastes 12:12b). Apparently information overload was a problem even back then.
Anyway, I agree that we have too much content, but not content in general. We have too much for-profit content. I can't back this up with numbers, but I'm willing to bet that the majority of content creation is just an indirect form of advertising.
Take news sites. Why are all those articles so crappy, full of mistakes, and often outright lies? Because news sites don't care about the truth; the content is just a way to make you see the ads.
Or take all those images people repost on social media. I have a close friend working as a "content marketer", so I get to see first-hand how those images are made. There are people whose job is literally to create dozens of such pictures every week and post them on the pages they manage. The pages themselves are only tangentially related to what is advertised.
The whole scheme works like this: imagine you want to advertise a fitness club you own. So you create a bunch of Facebook pages about dieting, general health and exercise, and you hire a bunch of people to make you an image or two for each page, every day, and to spread them around in a way that links back to said pages. The images are usually pretty useless; the content doesn't matter as long as people share and "like" it. The idea is that you will waste a little bit of lots of people's time hoping that a percent of a percent of those people will convert and pay enough to justify all that social media carpet-bombing.
Frankly, I find this scheme evil, but for some reason it is a respected occupation nowadays...
The problem we really need to solve is wasting other people's time to make money. Content overload will fix itself then.
Imagine a futuristic utopian human society. Are the people writing poetry, playing instruments and painting murals or are they assembling widgets in a factory?
Humans are really good at creating things; that's what I think we are here to do, not "work".
Most of the content nowadays is created as "work", usually for advertising. In the futuristic utopian society we would get rid of all that crap, and I'm willing to bet we could let people create however much they'd like and we still wouldn't have an information overload problem. It's easier to manage your information diet when everyone around you stops trying to push their content onto you.
This is sort of an easy thing for djb to say, but the truth is that most people are not good at creating things. Most people are barely useful for "work". The average keeps getting higher, but half the population is still, and always will be, below it.
Maybe in a future utopian society everyone is somehow above average, or there are radically fewer humans who are all really smart, but right now most people are not going to create top-tier "content". And there are enough people creating top-tier content, and enough top-tier content in the history of human art, that I don't need any more. Who is going to pay them? Why should I? Why should the state? Why is this a thing we should be buying? I can think of at least ten better uses for my money.
The average keeps getting higher, and the top tier improves, because of non-incremental creative leaps achieved by statistical outliers among the new blood who create "more content". Large sample sizes are needed to yield the good stuff.
But if you're truly a talented artist or musician, you can always make a living. It doesn't need to be easy and it doesn't need to be possible for everyone to "create content."
If good material can be discovered more efficiently, there will be more opportunities for recursive inspiration. In that case, there would be systemic benefit from ease of discovery, which may require making it easy to create content that guides discovery.
That is a ridiculous argument. The more content we produce, the more quality and diversity you can get in the corpus that you will read; the more choices that you can make. There might be infinite insight in the contemplation of cat pictures. :)
Or from another angle: why don't we destroy half of the content of the world? Surely we have enough with what remains. If we were to find another intelligent species tomorrow, would you be interested in their culture? Or won't you have time to read it?
What you said is hypothetical, but the question we face today is "should we put limits on our ability to transmit and receive information in the name of incentivizing content creators to create when we already have many lifetimes worth of content?"
Right now there are limits on what we can communicate, limits placed in the name of "protecting content creators." These limits are a kluge. For example, what if I were to take a song, compress it heavily, and then read the base64 representation aloud to a friend who transcribes it?
Have I infringed the copyright of that song? Or was I simply describing it so precisely that I allowed my friend to reproduce it flawlessly? If I sing the lyrics to my friend and he learns them, are we breaking a law? And more importantly, should we be breaking the law when we do this?
Why can't we communicate whatever we want? Why haven't we accepted that the progress of technology will slowly make all things a matter of "communication"?
I think that part of the answer to that question is a deep anxiety over such a profound shift in the "business" of our society. It would mean much would have to change, and while I'd argue that change would be for the better, I understand that anxiety.
Yes, more content adds more value. But past a certain point of saturation, it no longer adds enough additional value to compensate its creators. My argument is that we are long past that point.
Which would be more valuable, doubling the world's content, or doubling the amount of time and money to spend on it?
Let's say we cut "literary production" in half somehow. Is it better if everyone writes a little bit, or if we all pay someone who's really good at it to do it? I'd rather read a professional writer than my own ramblings, which means I'd rather a subset of good writers were able to earn a living off it.
> The more content we produce, the more quality and diversity you can get in the corpus that you will read; the more choices that you can make.
If so, then we should just spend all day reading /dev/urandom. Infinite content there.
> Or from another angle: why don't we destroy half of the content of the world? Surely we have enough with what remains.
If we destroyed the half that is advertisement in disguise, the world would be much better off.
> If we were to find another intelligent species tomorrow, would you be interested in their culture? Or won't you have time to read it?
I would, but I'd happily limit my time spent on analyzing ways members of that culture abuse one another and look for something interesting to learn from them.
> a C to fpga compiler which you would suspect could do some crazy things and took thousands of engineering hours to make work. But instead it just implements a CPU in the FPGA
Is that seriously how Vivado HLS works? Now I'm glad I decided not to buy it.
How well HLS does depends heavily on the source code (this is true in general, not just for Vivado HLS). If your code is a simple loop over an array and you add a vendor-specific #pragma directive (such as #pragma unroll), the tool will unroll your loop and extract the parallelism from there. This actually works quite well in practice for regular DSP code (like FIR and FFT) and for floating point. Anything else is another story, though.
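To make that concrete, here's a rough sketch of the kind of loop the tools handle well. The pragma spelling below is the Vivado HLS style and is placed inside the loop body; other vendors use their own directives, and the function and array names are just placeholders:

```c
#define N_TAPS 8

/* Simple FIR-style inner product.  With the unroll directive the HLS
 * tool can instantiate N_TAPS multipliers and adders side by side
 * instead of generating one multiply-accumulate per clock cycle. */
int fir(const int coeff[N_TAPS], const int sample[N_TAPS])
{
    int acc = 0;
    for (int i = 0; i < N_TAPS; i++) {
#pragma HLS UNROLL
        acc += coeff[i] * sample[i];
    }
    return acc;
}
```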
The thing is that unless you write your code the way the tool expects, with the proper pragmas and so on, there's no way it can be transformed into fast hardware. A way around that is for vendors to ship "customizable IP", kind of like Altera's Megafunctions... so much for portability and staying high-level.
I'm not sure which tool the OP is referring to, though; I remember Altera had a C2H tool that they discontinued in favor of their OpenCL SDK.