Wow, those who could find their own research topic were lucky.
I've never seen anyone in my environment get that much freedom.
The supervisor sets the problem and the student must solve it.
Edit: I was under the impression that even the postdocs
are hired for a specific task.
I didn't expect to see Artemiev mentioned here.
Great Soviet composer, and his "Meditation" is a very fitting
track for the movie. An amazing blend of electronic and Eastern traditional music. I didn't watch the whole video, but I
don't think he mentioned Jugalbandi.
The musician he asked to play on this track used to play mugams, according to Eduard, which is perhaps why they say it resembles a certain mugam: Bayat Shiraz
https://www.youtube.com/watch?v=wrfGR70Y_YY
I think the author is missing a couple of important points:
1) How would one approach this task having a day job?
2) How do you compensate for not having a chance to work
in a suitable environment doing good research?
2a) Knowing and
understanding everything that has been done in the past
doesn't by itself make you a researcher. This (being a scientist) must be
taught by actual theoretical physicists. There are many
supervisors, but very few who can teach you to do research.
2b) Serious and worthwhile research is seldom done by one person. You need to collaborate or at least communicate with
other scientists in the field (and sometimes even out of the field). You have to be in the right
environment. Just reading arxiv won't quite cut it.
I don't work at home all the time, but I've been struggling
to find some metric to hold myself accountable for the
quality and intensity of my work. Do you mind sharing your experience?
Metrics didn't work for me other than a binary, honest assessment of whether or not I worked on a particular day 'in good faith'. I treat it as a negotiation between my present and future selves. When you resolve to do something in the future, you are telling your future self that you know better than they do. This often fails for people because you have a much better understanding of your intent and reasons when you make the decision than when it is time to act. To counter this I decided to strengthen my 'resolve', which I define as the strength of my decisions. If my decisions are strong, they are much more easily recalled and felt when it is time to act.
In my experience there has to be a balance though. I can't always live by the decisions I have made in the past. I have to have some time each week where I live purely in the moment, with no prior plans guiding my actions. This way my future self doesn't begin to resent my past self.
At first it is odd to reason about yourself as different people at different times, but now I find it odd I ever thought to project an unchanging version of my present self into the future, as if time was not a factor to be considered.
Integrating a compiler into a text editor to speed it up is
a bit of overkill, imo. What if a Lisp source file changes and needs to be recompiled?
What if several Lisp files change? Does that mean it also
needs some sort of build system? This brings the joke
about an "operating system that lacks a decent editor"
closer to reality (or reality closer to the joke).
> Fortran has historically been noticeably faster than even C/C++ for numerical computations
Nowadays: faster, maybe; noticeably, I doubt it.
> supercomputers have better-optimized Fortran compilers than C compilers
I don't know about supercomputer compilers but mainstream compilers usually have the same backend for FORTRAN and C (as well as other implemented languages).
> created a great deal of high-quality legacy Fortran code that nobody feels an urgent need to port into C
Optimized and tested FORTRAN code, maybe, but not high-quality. I've seen some of it; FORTRAN makes it difficult to
write readable, maintainable code. For this reason
even scientists are rewriting their tools and libraries
(which also require good performance) in C++: see, for example,
Pythia, GEANT, and CERN ROOT.
The thing is, you can write perfectly normal Fortran code and instantly gain a speedup (CUDA, distributed computing with OpenMP, etc.) just by enabling some compiler flags. You can't do this in C/C++, as you have to deliberately write your program to use those technologies. Also, vector/matrix operations are first-class in Fortran, so you don't need to rely on 3rd-party libs.
> The thing is, you can write perfectly normal Fortran code and instantly gain a speedup (CUDA, distributed computing with OpenMP, etc.) just by enabling some compiler flags.
I'm not sure I understand you correctly. Can you give examples of such flags?
> Also, vector/matrix operations are first-class in Fortran, so you don't need to rely on 3rd-party libs.
It may be useful as long as you're hell-bent on not
using libraries (which is somewhat contrary to one of the
pro-FORTRAN arguments that FORTRAN has lots of libraries that are tested and ready to use).
This is a weak consolation though, since anything complex enough deals with custom matrix/vector types for sparse matrices or data types used in parallel computations.
Not sure about gfortran, but commercial Fortran compilers support automatic parallelization (e.g. the Intel Fortran compiler's -parallel flag [1]). You can even go as far as parallelizing your program across a cluster of machines via OpenMP by simply sprinkling some directives in your program to mark the code that must be parallelized. I remembered incorrectly about CUDA: the PGI Fortran compiler supports CUDA, but you still need to use it deliberately in your code, though there are projects that attempt to make this automatic (not sure if they've really taken off).
> It may be useful as long as you're hell-bent on not using libraries (which is somewhat contrary to one of the pro-FORTRAN arguments that FORTRAN has lots of libraries that are tested and ready to use).
Yes, libraries are still used, but typically only for data input/output. For example, NetCDF is a popular data format, and many Fortran projects support it via a 3rd-party library. But complex matrix computation is essentially what Fortran was made for, so it's not typical to use a 3rd-party library for that. Most big Fortran projects in the area I was involved with (meteorology and air pollution) use a minimal amount of 3rd-party libraries and mostly rely on built-in Fortran functionality, with optimization left to the compiler (typically Intel or PGI Fortran). There is definitely code reuse, but it's in the form of the scientist collecting snippets of useful algorithms over the years and copying them into the project when needed.
On a side note:
having (semi)automatic parallelization with code generation
for GPGPU would be very nice.
> There is definitely code reuse, but it's in the form of the scientist collecting snippets of useful algorithms over the years and copying them into the project when needed.
Well, doing complex matrix calculations yourself in C/C++ without a 3rd-party library is hard. Unless you write everything yourself or specifically use the Intel MKL library, the benefit of enabling automatic parallelization in C/C++ won't be as impactful as in Fortran, where it's common to do all calculations without any 3rd-party math library.
Could be; at least C++ has the tools to implement a better design.
I've seen one guy's Python code that looks worse than his FORTRAN code.
There's a Russian saying: "A true FORTRAN programmer can write FORTRAN code in any language."
One fun way to evaluate potential applicants for working on a C/C++/Fortran compiler backend is to ask them about Fortran. If they say that they appreciate how it makes it easier to optimize things, then you're probably talking to an experienced engineer.