Hacker News | dnadler's comments

Franklin Templeton | Quant Implementation Lead | Boston, MA | In-Person | $145k - $190k + Bonus

We're building an end-to-end portfolio construction system to support FTIS, a $100bn multi-asset investment manager within FT. This is a front office role that will help guide our quantitative staff as we build out this new system. This project is a strategic objective for FTIS, highly visible, and frankly one of the most fun jobs in the industry.

We're looking for mid to senior level researchers or developers with strong buy-side backgrounds who can help translate our investment process into a robust, scalable, and approachable design.

Reach out with any questions - dan.nadler (at) franklintempleton (dot) com

Apply here: https://franklintempleton.wd5.myworkdayjobs.com/Primary-Exte...


I think maybe this refers to unlearning wrong information?


Also abstracting. No need to remember every millisecond of its lifetime and consult them all on every query.


I can remember, for example, when I was wrong and how, while still responding correctly - I don't have to forget my wrong answer to respond with the correct one.


I have a similar mindset though less focused on the type of questions being asked and more about how many times I have to answer the same question.

Ideally, the number is one time. As in one conversation where the person walks away understanding the answer. If I have to have that conversation more than once it’s a problem.

Obviously there’s nuance - it can take time to get your head around a new concept or hard problem. But in any case, I like that as one dimension when thinking about a person’s skill/level/potential.


> I have a similar mindset though less focused on the type of questions being asked and more about how many times I have to answer the same question.

Yes, I completely agree and do that as well.

The focus on “type of question” has been something I’ve done more recently after helping someone out. Just reflecting on “what type of problem did I just help solve, and how can I make it easier for them to solve on their own in the future?” Very often the answer is “more documentation” or similar - getting things that are only in my head written down where everyone can benefit. On the other hand, I walk away from some problems I’ve helped with frustrated that the answer was 1-2 Google searches away and the issue had nothing to do with “our stack”.


I mostly agree with you, but there’s another angle that is similar: how many times does the person come to you with a similar question on a slightly different topic, where you need to guide them through _how to find_ the answer? I’ve supervised mid-level engineers in the past who will just drop a stack trace in a Slack DM and expect me to tell them what’s wrong - I didn’t write the code, so why do you expect me to figure it out for you? But when I have the conversation of “we’ve talked about how to debug these kinds of problems a few times now; next time you need to apply these techniques”, it often doesn’t land.


Actually, I think humans require much less energy than LLMs. Even raising a human to adulthood would be cheaper from a calorie perspective than running an AGI algorithm (probably). It's the whole reason why the premise of the Matrix was ridiculous :)

Some quick back-of-the-envelope math says it would take around 35 MWh to get to 40 years old (at 2000 kcal per day).
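
For what it's worth, the arithmetic checks out. A quick sketch of that envelope, assuming 4184 J per kcal and 365.25 days per year:

```python
# Back-of-the-envelope: lifetime food energy for a human at 2000 kcal/day.
KCAL_TO_J = 4184            # 1 kcal = 4184 J
kcal_per_day = 2000
days = 40 * 365.25          # 40 years

total_joules = kcal_per_day * KCAL_TO_J * days
mwh = total_joules / 3.6e9  # 1 MWh = 3.6e9 J
print(round(mwh, 1))        # -> 34.0
```

So roughly 34 MWh, in line with the "around 35 MWh" figure above.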


I read an article once that claimed an early draft/version that was cut for time or narrative complexity had the human brains being used as raw compute for the machines, with the Matrix being the idle process to keep the minds sane and functional for their ultimate purpose.


I've read a file that claimed to be that script; it made more sense for the machines to use human brains to control fusion reactors than for humans to be directly used as batteries.

(And way more sense than how the power of love was supposed to be a nearly magical power source in #4. Boo. Some of the ideas in that film were interesting, but that bit was exceptionally cliché.)


I'd love to read that file. Of course, we're close (really close?) to being able to just ask an LLM to give us a personalized version of the script to do away with whatever set of flaws bother us the most.


One of the ways I experiment with LLMs is to get them to write short stories.

Two axes: quality and length.

They're good quality. Not award winning, but significantly better than e.g. even good Reddit fiction.

But they still struggle with length, despite what the specs say about context length. You might manage the script length needed for a kid's cartoon, but not yet a film.

I'll see if I can find another copy of the script; what I saw was long enough ago my computer had a PPC chip in it.


> PPC chip

Pizza box? I loved the 6100.


Beige proto-iMac. I had a 5200 as a teen and upgraded to either a 5300 or a 5400 at university for a few years — the latter broke while at university and I upgraded again to an eMac, but I think this was before then.

Looks like there's many different old scripts, no idea which, if any, was what I read back in the day: https://old.reddit.com/r/matrix/comments/rb4x93/early_draft_...

I miss those days. Even software development back then was more fun with REALbasic than today with SwiftUI.


HA! I used REALbasic a bit back in the day, then spent my time comparing it to LiveCode, back then called Revolution. Geoff Perlman and I once co-presented at WWDC to compare the two tools.


You need to consider all the energy spent to bring those calories to you, easily multiplying your budget by 10 or 100.


A human runs on ~100W, even when not doing anything useful. It's entirely plausible that 100W will be enough to run a future AGI level model.


I think the big piece that is being overlooked here is the distance. The distance itself poses significant challenges. The obvious things like resupply and communication are much harder. But also the journey to mars is much harder on the human body.

Rescue and abort options are also much harder. The moon is close enough to easily resupply or rescue people on the surface, mars is much harder.


Completely agreed. Distance will impose substantial challenges, but the good thing is that that's really the "only" big challenge there is. I think many people have this mental model where the Moon is easy and Mars is hard, perhaps because we've already set foot on the Moon and so clearly it can't be that bad.

But if somehow both of these bodies were orbiting around Earth, Mars would be just orders of magnitude more straightforward than the Moon, and I think it's relatively likely we'd already have permanent outposts, if not colonies, there. So the mental model of it being viewed as a stepping stone is somewhat misleading. The Moon is hard!

And also I don't think the distance will be that bad. We've already had 374 day ISS stays which is far longer than any possible transit to Mars (though nowhere near as long as a late-stage mission abort would entail) and the overall effects of such a stay were not markedly different than significantly shorter stays on the ISS. So it seems very unlikely that even a late stage emergency abort would be fatal.


I haven’t kept up with Python too much over the past year or two and learned a couple of new things from this code. Namely, match/case and generic class typing. Makes me wonder what else is new - off to the Python docs!


Thanks for pointing that out. Seems a thorough scan of the "What's New" pages is due - going back to maybe 3.7?

For the record, language additions I found interesting (excluding type hints):

  3.11:
   (Base)ExceptionGroup ; (Base)Exception.add_note() + __notes__
   modules: tomllib
  3.10:
   Parenthesized context managers
   Structural pattern matching - match ..: case ..
   builtins: aiter(), anext()
  3.9:
   dict | dict ; dict |= dict
   for a in *x, *y: ...   # no need for (*x, *y)
   str.removeprefix , str.removesuffix
   Any valid expression can now be used as a decorator
   modules: zoneinfo , graphlib
  3.8:
   Assignment expressions
   Positional-only parameters
   f'{expr=}'
   Dict comprehensions and literals now compute the key first, then the value
  3.7:
   builtins: breakpoint()
   __getattr__ and __dir__ of modules
   modules: contextvars , dataclasses
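
A few of the items above in action (a quick sketch; str.removeprefix and dict union are 3.9, the f'{expr=}' debug format and assignment expressions are 3.8):

```python
tag = "v3.9.0"
version = tag.removeprefix("v")   # str.removeprefix (3.9)
print(f"{version=}")              # f'{expr=}' debug formatting (3.8) -> version='3.9.0'

data = [4, 9, 16]
if (n := len(data)) > 2:          # assignment expression (3.8)
    print(n)                      # -> 3

merged = {"a": 1} | {"b": 2}      # dict union (3.9)
print(merged)                     # -> {'a': 1, 'b': 2}
```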


Misc:

3.10: zip(strict=True)

3.11: asyncio.TaskGroup (structured concurrency), enabled by ExceptionGroup

3.12: itertools.batched(L, n) - it replaces the zip(*[iter(L)]*n) idiom
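
One difference worth noting: the zip idiom silently drops a trailing partial chunk, while batched keeps it. A sketch (the local batched here just mimics itertools.batched so it runs on pre-3.12 Pythons):

```python
from itertools import islice

def batched(iterable, n):
    # Rough stand-in for itertools.batched (3.12+)
    it = iter(iterable)
    while chunk := tuple(islice(it, n)):
        yield chunk

L = [1, 2, 3, 4, 5]
print(list(batched(L, 2)))        # [(1, 2), (3, 4), (5,)]
print(list(zip(*[iter(L)] * 2)))  # [(1, 2), (3, 4)] - remainder dropped
```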


There was also the walrus operator


that's the Assignment expressions, i.e. :=


True, I did not see that. My bad, sorry!


Oof, yeah, this site is really not great on iOS.

The first time I published a site, I was surprised by how much traffic came from mobile devices, even though my page was intended for desktop users. I really shouldn’t have been surprised, but fortunately I had some basic analytics and saw fairly quickly how bad my bounce rate was on mobile and was able to work on it a bit.


While that wasn’t my experience as a junior developer, this is something that I used to do with academic papers.

I would read it start to finish. Later on, I learned to read the abstract, then jump to either the conclusion or some specific part of the motivation or results that was interesting. To be fair, I’m still not great at reading these kinds of things, but from what I understand, reading it start to finish is usually not the best approach.

So, I think I agree that this is not really common with code, but maybe this can be generalized a bit.


> reading it start to finish is usually not the best approach.

It really, really depends on who you are and what your goal is. If it's your area, then you can probably skim the introduction and then forensically study methods and results, mostly ignore conclusion.

However, if you're just starting in an area, the opposite parts are often more helpful, as they'll provide useful context about related work.


> this is something that I used to do with academic papers

Academic papers are designed to be read from start to finish. They have an abstract to set the stage, an introduction, a more detailed setup of the problem, some results, and a conclusion in order.

A structured, single-document academic paper is not analogous to a multi-file codebase.


No, they are designed to elucidate the author's thought process - not the reader's learning process. There's a subtle, but important difference.

Also: https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPape...


they are designed to elucidate the author's thought process - not the reader's learning process

No, it’s exactly the opposite: when I write papers I follow a rigid template of what a reader (reviewer) expects to see. Abstract, intro, prior/related work, main claim or result, experiments supporting the claim, conclusion, citations. There’s no room or expectation to explain any of the thought process that led to the claim or discovery.

Vast majority of papers follow this template.


The academic paper analogy is interesting, because code and papers are meant to do the exact same thing: communicate ideas to colleagues. Code written by a small group of competent programmers with a clear, shared vision is therefore a lot easier to read than code written by a large group of programmers who are just desperately trying to crush enough jira story points that they don't get noticed at the next performance review.

The difference is usually papers written that badly don't go into "production"--they don't pass review.

I usually read code top-to-bottom (at least on a first pass) in two ways--both root-to-leaf in the directory/package structure and top-to-bottom in each source file. Only then when I've developed some theory of what it's about do I "jump around" and follow e.g. xref-find-references. This is exactly analogous to how I approach academic papers.

I think the idea that you can't (or shouldn't?) approach code this way is a psychological adaptation to working on extremely badly wrought codebases day in and day out. Because the more you truly understand about them the more depressing it gets. Better just to crush those jira points and not think too much.


You're supposed to read academic papers from start to finish.


You're supposed to read the abstract, preferably the bottom half first to see if there are conclusions there, then proceed to the conclusions if the abstract is insufficient. Once you're through with that, you can skim the introduction and decide if the paper is worth your attention.

Reading start to finish is only worth it if you're interested in the gory details, I'm usually not.


I was taught to read the abstract, then the conclusion, then look at the figures, and maybe dig into other sections if there's something that drew my interest.

Given the variety of responses here, I wonder if some of this is domain specific.


It depends also on what you want to get from the article. Usually I focus on the methods section to really understand what the paper did (usually I read experimental papers in cognitive science/neuroscience). I may read parts of the results, but hopefully they have figures that summarize them so I do not have to read much. I rarely read the conclusion section, and in general I do not care much about how authors interpret their results, because people can make up anything, and if one does not read the methods one can be really misled by the authors' biases.


It’s interesting how many different opinions there are in this thread! Perhaps it really varies by field.

I was reading mostly neuroscience papers when I was taught this method as an undergrad (though the details are a bit fuzzy these days).

I’d bet it also varies quite a bit with expertise/familiarity with the material. A newcomer will have a hard time understanding the methodology of a niche paper in neuroscience, for example, but the concepts communicated in the abstract and other summary sections are quite valuable.


I learned very quickly reading math papers that you should not get stuck staring at the formulas - read the rest first and let it explain them.

I would not say it should be read start to finish, I often had to read over parts multiple times to understand it.


It can be more than one thing - it's both, and more.


No it's not


You’ve convinced me!


Awesome! I'm glad we could come to an agreement


People in US suburbs frequently live several miles from their town center, if they even have a town center at all.

Cycling simply isn’t viable for large swaths of the US. There isn’t an easy answer to this either, given that you’d probably need to tear down entire neighborhoods and rebuild denser towns.


E-bikes fix most of those problems. A 10 mile trip, out and back, at 20mph, with no concern for loss of momentum at stop signs and lights, is easily doable by anyone who doesn't need mobility aids.


That fixes it for certain types of people, but certainly not for most people in the Northeast during winter.


Most of the sprawl in America is not in the northeast. E-bikes are perfect for US suburbs.


> People in US suburbs frequently live several miles from their town center

It is not the responsibility of the citizens of the central city to accommodate suburban drivers.


In that case all the retail will go out to the suburbs and they’ll lose the jobs and have to drive out to the suburbs to go shopping. Or just lose the jobs to AMZN warehouses even further away.


That is clearly not inevitable, which you can demonstrate to yourself by glancing around pretty much any city on Earth.


Well, commutes into a central city are a bit of a different beast, and many US metros have trains that service the suburbs.

Driving in and around the suburbs is what I was talking about.

