That's so close to a variant we once invented. There were four of us, two good at chess and two beginners. We played in teams, a good player and a beginner on each team. We took it in turns to move, and you couldn't tell your partner ANYTHING.
As the good player, you had to come up with a move that was good for the board but also accounted for whatever your partner might do next. Was fun!
The main benefit most people see right away is the Pydantic integration, and it requires less boilerplate for basic APIs. Ninja is essentially FastAPI + Django.
I prefer Ninja over DRF, but I know plenty of orgs that still love their class-based DRF views: once you're over the (significant) mental hurdle of understanding all the abstraction there, it does give you the common CRUD-type operations on your models "for free".
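To make that concrete, here's a minimal sketch of the DRF pattern being described; the Track model and its fields are hypothetical. A ModelViewSet plus a router yields list/retrieve/create/update/destroy endpoints without writing any of them by hand:

    from rest_framework import routers, serializers, viewsets

    class TrackSerializer(serializers.ModelSerializer):
        class Meta:
            model = Track  # hypothetical Django model
            fields = ["id", "title", "duration_seconds"]

    class TrackViewSet(viewsets.ModelViewSet):
        # This one class provides all the common CRUD operations "for free".
        queryset = Track.objects.all()
        serializer_class = TrackSerializer

    router = routers.DefaultRouter()
    router.register(r"tracks", TrackViewSet)
    # In urls.py: path("api/", include(router.urls))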
DRF has more abstraction. When I was new to Django, I found it hard to build a larger API with DRF without making mistakes or having things get confusing; you're primarily working by extending classes.
With django-ninja, you just define your API endpoints as functions with annotated types; there is no magic, and you get a generated OpenAPI spec.
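For contrast, a minimal sketch of the django-ninja style being described (the same hypothetical Track model; error handling omitted):

    from django.shortcuts import get_object_or_404
    from ninja import NinjaAPI, Schema

    api = NinjaAPI()  # serves a generated OpenAPI spec once mounted in urls.py

    class TrackIn(Schema):
        title: str
        duration_seconds: int

    class TrackOut(Schema):
        id: int
        title: str
        duration_seconds: int

    @api.post("/tracks", response=TrackOut)
    def create_track(request, payload: TrackIn):
        # payload has already been validated against TrackIn by Pydantic
        return Track.objects.create(**payload.dict())  # hypothetical model

    @api.get("/tracks/{track_id}", response=TrackOut)
    def get_track(request, track_id: int):
        # track_id is parsed from the path via its type annotation
        return get_object_or_404(Track, id=track_id)

Plain functions and plain type annotations, and the schema documentation falls out automatically.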
This was my experience anyway; I used DRF for this project [0] and ninja for this one [1].
I haven't used django-ninja, but to me the API looks a bit nicer and more "modern" (i.e., declarative via type annotations), and it's faster; both follow from it being based on Pydantic.
DRF is old, its API looks more like Django forms or class-based views (more of an OOP hierarchy going on), and DRF serializers are slow.
Old is a harsh word; maybe "mature" would be a better fit. Not everything new and shiny is gold, and yet not everything old sucks.
I'm not arguing here about types and Pydantic being faster than the built-in ModelSerializers. However, for serializer speed improvements and performance in DRF, I would advise dropping ModelSerializers and going for either plain Serializers or plain dicts. Haki Benita has a beautiful article on that [0]. I was able to get sub-200 ms response times on a fairly large response from tables with tens of millions of records.
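A hedged sketch of that advice, with a hypothetical Order model: a plain Serializer declares its fields explicitly and skips ModelSerializer's per-request model introspection, and the .values() path skips the serializer layer entirely:

    from django.http import JsonResponse
    from rest_framework import serializers

    class OrderPlainSerializer(serializers.Serializer):
        # Explicit fields: no model introspection, no model instances required
        id = serializers.IntegerField()
        total = serializers.DecimalField(max_digits=10, decimal_places=2)
        created_at = serializers.DateTimeField()

    def list_orders_fast(request):
        # Fastest path: plain dicts straight from the DB, no serializer at all
        rows = Order.objects.values("id", "total", "created_at")  # hypothetical model
        return JsonResponse(list(rows), safe=False)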
I think you have no objective reason here other than styling and a rather personal preference for function-based views?
Any terminal that has rich pane-manipulation semantics can take the place of tmux; this includes kitty.
The one thing kitty can't do, however, is have a remote session that keeps your workspace intact. I, and many other devs, purposely decouple our development from any given machine we're on.
The majority of reboots or other disruptive behavior comes from desktop machines (for those of us who don't use Linux as our primary and only desktop OS); running the tmux session off another machine (even if it's just a headless machine under our desk) minimizes disruption due to GPU/etc. bullshit.
I want to add that I've been using tmux for the last decade.
I went from gnome-terminal, konsole, and multiple remote SSH sessions (tmux on the remote side for long-running, sporadic jobs like repairing databases) through Windows Terminal, iTerm, iTerm2, kitty, PuTTY, etc.; from ancient Ubuntu (the CD-ROM giveaway era) to modern macOS releases, through several Windows releases.
I've been a happy wezterm camper for the last 6 months, with Alacritty as secondary and Windows Terminal on my gaming machine (WSL2 for some stuff).
Across all of them, it's still the same tmux config, just evolving: I change plugins, tweak the colors of the status pane, add starship.rs to the mix. I'm very tempted to give zellij a serious try, but not sticking to any one of these terminal interfaces has given me lots of flexibility and consistency.
Even locally, when the GUI hangs and I need to restart it (Xwayland still has issues), having most of my work in local tmux sessions saves a lot of time after logging back in.
I had this for a while, and I suspect a lot of users are like me (and, I assume, you) in that the functionality they need from tmux is more or less just tabs and splits (tmux does more than this, but I don't tend to use those features), which kitty replaces pretty nicely.
Since I work across actual Linux and WSL, I really appreciate having something terminal-agnostic like tmux, since it means I don't need a separate setup when kitty isn't available.
Discovering tiling window managers (it was i3; I now use sway) stopped me using it. I only really used tmux for tiling, I suppose, and then discovered I could have that for other apps too.
Something to remember is that this is his pace _after_ pit stops, so his actual running pace was faster. Also notable is that this course was kind of terrible, with four 90-degree turns and three 180-degree turns each lap for 209 laps. There's little doubt that he could run >200 miles on a course with fewer turns.
I downloaded the .gpx of the run, and it's 6.7 MB.
He started at a 4 min/km pace, held it for 130 km (about 9 hours), and then "slowed down" almost linearly to 5 min/km by the finish.
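As a rough sanity check on those numbers, assuming a 24-hour run and a pace that fades linearly in time after the fast start (the exact fade profile is my assumption, and stops are ignored):

    import math

    fast_pace, slow_pace = 4.0, 5.0  # min/km at the start and at the finish
    fast_km = 130.0

    fast_minutes = fast_km * fast_pace   # 520 min, ~8.7 h: matches "9 hours"
    remaining = 24 * 60 - fast_minutes   # 920 min of gradually fading pace

    # distance = integral of speed over time; for pace rising linearly from
    # 4 to 5 min/km over T minutes, that integral is T * ln(5/4) / (5 - 4)
    fade_km = remaining * math.log(slow_pace / fast_pace) / (slow_pace - fast_pace)

    print(round(fast_km + fade_km))      # ~335 km before subtracting stops

The result lands a bit above the recorded total, which is consistent with the pit stops mentioned upthread.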
Strava also shares the elevation during the run (screenshot in case you don't have Strava: https://imgur.com/PuuKo5r). What I find interesting and don't really understand is why the elevation keeps going down during the run, even though he stayed on the same track. Anyone have an idea why that could be?
The elevation in this case is just a proxy for the barometric pressure, because the actual air pressure is the important thing to track if you're trying to normalize athletic performance. If the watch logged the GPS altitude instead, the elevation would be more accurate but less useful, since the air density would only be estimated.
All GPS watches already calculate their altitude: GPS positioning is three-dimensional trilateration, so you get the elevation because you have to compute the full three-dimensional position anyway. That elevation measure is usually pretty inaccurate in Central Europe, though. Since the satellites there are comparatively low above the horizon, horizontal movements of the runner have a large effect on the travel times of the satellite signals, while vertical movements barely change them, so the vertical component is much more affected by any error.
This is why most portals like Strava run a data correction, overriding the elevation data of the GPS track by tracing the path over their internal elevation model.
The barometric elevation calculation is usually much more accurate and is constantly calibrated on the watch. This breaks down a little if you run for over 24 hours and the weather changes a lot; the drift in this run amounts to about 8 m per 2 hours, which doesn't really matter. And while air pressure can be used to normalize athlete performance, the absolute values measured by the barometer are usually pretty inaccurate, while the relative changes are quite accurate. They can therefore be used to measure climb and descent, but not really for an absolute air-density measurement.
Source: I have worked on a GPS wearable (without barometric pressure sensor).
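For intuition, here is a generic sketch of the pressure-to-altitude conversion involved: the international barometric formula under a standard atmosphere, not any particular watch's firmware.

    def pressure_to_altitude_m(pressure_hpa: float,
                               sea_level_hpa: float = 1013.25) -> float:
        # International barometric formula, standard-atmosphere constants.
        # Calibrating sea_level_hpa against a known elevation is what keeps
        # the relative readings accurate; weather shifting it during a 24 h
        # run is exactly the slow drift described above.
        return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

    # A 1 hPa weather change near sea level reads as roughly 8 m of elevation:
    print(pressure_to_altitude_m(1012.25) - pressure_to_altitude_m(1013.25))  # ~8.3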
I didn't know you could get a watch with more than 24 hours of battery life including GPS tracking (COROS APEX Pro). The accuracy is a bit weird, though; the elevation chart doesn't make sense.
I don't think it's that big of a deal for good running watches. I ran the JFK 50 with a three-year-old Garmin 935 and a heart-rate strap (HRM-Tri) in a little under 11 hours, and my watch was still close to 50%. That was with GPS updates every second, the default; it would have done much better still with UltraTrac, or whatever they call the endurance battery mode.
It's really only in the last few years that running watches could last >24 hours with GPS tracking enabled.
I also have a Forerunner 935, which I've used for 24 hours, but I think I had it in ultra mode, where it takes GPS samples less often.
Before that I had a Forerunner 230, and I had to charge it mid-race during a 100-miler, which was fun to do while running (I used an external battery and put the watch and battery in a running vest while running one of my laps).
The latest watches which do 36 hours or more with full GPS accuracy are really amazing.
Apparently the reasoning is that what actually matters is the oxygen partial pressure in the atmosphere, not true elevation. Measuring air pressure and reporting it as elevation is a decent proxy for what athletes actually care about.
High-end Garmin watches as well: the Fenix 7 can do 48+ hours including tracking, and the Enduro 2 will likely go 3 days.
And yes, the elevation is bonkers, because GPS elevation via watch is about as precise as a guess (±400 feet on Garmins, and I'm not sure there are any watches significantly better than that).
I ran for 8:30 with my ancient Forerunner 235 and it still had some battery left. Modern watches can easily do 24 hours; even the mid-range Forerunner 255 can do 30 hours, and the larger 955 can do 42 hours.
That's 36 hours of normal use with a 60-minute workout [1]. My Fenix 6 lasts 8 days of normal use and currently tells me it can record a 14-hour run even with only 42% battery.
My Apple Watch with Cellular enabled is the only electronic doo-dad that I carry on my bike rides, even up to 100 miles. At the end of the night when it goes on the charger, I might have 20% left. That's good enough for me considering I seldom, if ever, leave the house with a phone.
And if one could ever get a cellular-enabled watch without an iPhone, I'd be first in line.
Remote: Yes (North America or UK preferred), or hybrid
Willing to relocate: For the right opportunity
Technologies: Full-stack JavaScript / TypeScript (Node.js, Bun, React, TanStack, Next.js, Jest, Playwright), Python (Django, Flask, FastAPI), Ruby + Rails (including Hotwire), OpenAI + Anthropic APIs, GraphQL, Docker, Dokku, AWS / DigitalOcean, CI/CD, Tailwind, PostgreSQL, plain old HTML / CSS (not exhaustive)
Résumé/CV: https://www.linkedin.com/in/rnevius/ (full resume provided upon request)
Email: hello@ryannevius.com
---------
<wave> Hi, I'm Ryan! </wave>
What I do: Chief Technologist, Engineering Lead, and Senior Full-stack Developer with over 15 years of experience (most recently as founding engineer and CTO at LifeWeb 360)
Looking for: Early-stage or growth-stage startup solving real-world problems, with a solid team
Availability: Immediate
Why me: I prioritize delivering value over chasing trends (Shiny New Toys™), and I'm committed to building systems that are robust, scalable, and maintainable. I only shave yaks when absolutely necessary.