This isn't linear interpolation, but it is a discrete approximation of a linear differential equation (note that all linear functions can be described as lerps, but that's uninteresting in this case):
x'(t) = 0.2(b - x(t))
where b is the final position, x is the current position as a function of time, and t is time.
Solving this linear differential equation gives us:
x(t) = b + (x(0) - b)e^(-0.2t)
which up to a change in constants is what easings.net would call easeOutExpo [1]. So the usual way you would describe this "lerp" in computer graphics would be "exponential ease-out". This solved version of the ODE also gives us a function that isn't dependent on the framerate, so in practice the direct form with the exponential function is preferable for implementation (unless you're in a very constrained environment).
Good explanation. But one thing that might not be obvious is that "b" is a moving target in the article (and in many cases where this technique is used), so using your equation, moving "b" by one unit in a given frame will also move x(t) by one unit, causing it to feel a bit faster/more aggressive.
Your final equation will guarantee that you will end up at (a given closeness to) the target at a given time, wherever that target might be, and however fast it might be moving away.
Using the (poorly-named) approach in the article gives you a subtly different behavior, where the easing is essentially "reset" each frame. If you "run away" from the easing object at the right speed, you can delay it reaching you for an arbitrarily long amount of time, which has a different feel, and different practical consequences depending on the context. This is demonstrated in the "Follow the mouse" example in the article.
I believe this is actually an example of what's known as a "proportional derivative controller", which is effectively an underdamped spring. With a PD controller, the further the target points are apart, the faster the interpolation.
Lerp, on the other hand, is linear; the interpolation speed should not change:
float Lerp(float A, float B, float t)
{ return (1.f - t) * A + t * B; }
In the example implementation I present, it's assumed that t is pre-scaled by the frame delta time and lies within the bounds (0.f, 1.f).
You can easily make a PD controller out of a lerp, but (without a lot of gymnastics) you can't create a lerp out of a PD controller... at least as far as I know.
In PID terms, this is simpler still: just a proportional controller. Only the error between the target and current position is acted upon, by a single (proportional) gain:
(targetPosition.x - position.x) * 0.2 == e * kP
A derivative term would be applied to the difference between the previous and current error, usually in order to temper the rate of change. D terms can be a proper headache in practice, especially if you have any noise in your system...
It’s funny, a while back I was working on a video game and started writing a dumb AI driver for a semi-physical car. I started by doing the thing in the article, using the P term only, for the steering input to the car. It’s the same thing I’d done for follow-cameras in games with success.
Much to my surprise, steering a car in proportion to how far off track it is turns into an oscillation, and the more you damp the proportional term, the more unstable and divergent the oscillation gets! Well, I’d had differential equations in college, but when someone told me that PID controllers were the solution to my driving AI, I was very surprised I’d never heard of them before.
PID controllers are neat, they’re almost like a suspension shock with an adjustable spring that has damping and rebound controls, but unitless and you can plug them into anything.
To my mind, the lerp thing makes the direct interactions in the demos look slow, smooth, and imprecise instead of fast, sharp, and immediately correct.
Where lerp would be great is showing indirect results of some actions: a non-obvious effect shown as a slower animation may give a better idea of what is happening, when you were not tracking the value previously. The scrolling demo gives a glimpse of it, but it's still a bit too direct to benefit seriously.
Without context, the animation applied to a basic interaction seems like it's getting in the way of user interaction.
But as you hint, in many situations a gradual change of the presentation will allow a user a lot more awareness of the presentation and its interconnections as a whole.
With scrolling, the idea is usually you're looking at a massive block of text with some images. Why are you scrolling?
Sometimes, you may know exactly where on the page you're trying to go -- in which case lerp will interfere with the interaction.
But in most cases, you're using the scroll feature to skim and find the correct place, in which case an animated scroll will serve the user better since the content will have a predictable location throughout the presentation.
> To my mind, the lerp thing makes the direct interactions in the demos look slow, smooth, and imprecise instead of fast, sharp, and immediately correct.
I think the examples are made this way specifically for you to notice the effect. Animations can, and should be employed sparingly and made fast. However, there are quite a few places where adding just a little touch of animation helps immensely: buttons that react to the touch, slight dampening of the scroll when you jump to an anchor link (so that the eye adjusts to the new page position), state changes etc.
Personal gripe: older versions of iOS were quite good with that, but then they added a ton of animations, and made them slow.
Yes. The examples were clear, but they seem like not the best use cases for this. My impression is that it was smoother but laggy and imprecise.
I wouldn't want it for an object I was directly controlling (what's needed is a faster sampling rate), but it's likely great for less controlled objects. Or maybe just change the parameters to make it track more closely (adjustable)?
I wonder if I'm falling into the trap where a moderately interesting article with something clearly misleading about it makes me want to correct things. "Drives engagement" I suppose (:
But yeah, please don't call this lerping, as others have mentioned.
Real lerp is one of a whole constellation of easing functions[1][2], and indeed playing with variations of "how does f(t) vary from 0–1 as t varies from 0–1" is super useful in all sorts of contexts.
But "get X% closer to the (moving) target every frame" is a more complicated thing. It is not just a f(t) easing function, even if the math overlaps.
In fact, this confusion has already had articles written about it[3] so I swear it's not just me!
I don't think the approach mentioned in TFA is wrong per se. Situations vary, and it can be useful. But there is enough terminology confusion in CS already.
One downside of the approach: you never know _when_ you'll reach your target. Also, especially with the slider example, the feedback feels much more "sloppy." I much prefer the tighter-feeling simple version. But in some cases, like when an enemy is chasing you in a video game, it can be a useful simple-to-implement technique. Be sure to cap your min/max velocities, though.
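A sketch of that capped-chase idea in C (hypothetical names, assuming one axis and a per-tick update): move a fraction of the remaining distance each tick, but clamp the step so the chaser neither crawls forever near the goal nor teleports when far away.

```c
#include <math.h>

/* Move `x` toward `target` by `fraction` of the remaining distance,
 * with the step clamped to [min_step, max_step]. The final clamp to
 * the remaining distance prevents overshooting the target. */
float chase_step(float x, float target, float fraction,
                 float min_step, float max_step)
{
    float delta = (target - x) * fraction;
    float dist  = fabsf(target - x);
    float step  = fabsf(delta);
    if (step > max_step) step = max_step;
    if (step < min_step) step = min_step;
    if (step > dist)     step = dist;   /* never overshoot */
    return x + (delta >= 0.0f ? step : -step);
}
```

The min_step clamp is what fixes the "never quite arrives" problem: once the proportional step falls below it, the chaser closes the remaining gap at a fixed speed and actually lands on the target.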
Ironically, the person who posted this article on lerp also posted the article you linked to in [3], which calls out exactly the problem with this article. What a weird series of events..
That interpolation (which isn't a lerp) is framerate dependent. On slower devices, it will be exponentially slower, on faster devices it's going to be near-instant.
Anytime you see an interpolation that does not use a time-component, that interpolation will be (wildly) different on different devices.
It shouldn't matter if it's a fast desktop or a slow mobile (unless it's not keeping up, which is unlikely in these examples); what matters is the display refresh rate on these devices. The demos feel very fast on a 250Hz monitor compared to a 60Hz one.
That's right. These demos are, as they should be, using requestAnimationFrame [1]. The docs have "Warning: Be sure to always use the first argument (or some other method for getting the current time) to calculate how much the animation will progress in a frame, otherwise the animation will run faster on high refresh rate screens."
Though there are cases like iOS cross-origin frames where it's throttled to 30fps [2] for power-saving reasons.
Sure, but this is a demo seemingly targeted at people who design interactions, not definitive documentation for front-end developers. In any animation environment with this much control, taking the tick delta or some equivalent into consideration is trivial-- a couple of lines at most. But adding implementation details irrelevant to the core mechanism unnecessarily increases the cognitive load for folks who don't parse code for a living.
But...the implementation is such that you can't take into account the number of ticks.
Every time you call it, it moves by the same amount.
This is how people did animations back when PCs were always the same speed, so you didn't need to worry about frame rate changes. And it's why they had to add a "turbo" button to the next generation of PCs...because many games weren't playable when the clock rate was nearly 2x the previous generation.
So it's an example that can't be used. The details aren't irrelevant. They're crucial.
So unless the goal is a fluff article that isn't actually useful for anything, it's deceptive. Run these animations on a 200FPS screen and they'll look very different than at 60FPS.
They're crucial for developers implementing this technique. They're irrelevant for designers that only need to know that this is possible and what it looks/feels like to the users.
Sorry, just my feeling but I hate when UI lags behind my input needlessly. I want immediate feedback; get out of my way. This is especially true on page scroll.
What? You don't like pushing a button and waiting for it to register that you pushed it but it doesn't so you push it again, but somehow it did register and something popped up in its place a millisecond before you pushed it again, and now you just pushed something that you didn't intend to?
And Teams. My favourite feature: if you are previewing a file[1] in a "meeting" (or channel or chat or whatever it is) and you accidentally double click the "close" button, the "initiate a call with everyone in the channel right now" button is what will receive the second click.
1: itself a dreadful and laggy interface compared to just opening the file in a functional native viewer.
When a website is loading dynamically, and the UI is being reshaped on the fly with unpredictable delays, so you move your mouse to the widget you want to interact with and, by the time you actually do something with it, enough of the page has loaded that your interaction hits a completely different widget! Fun!
Yes, I use the Internet Archive's Firefox add-on, why do you ask?
Why is this so consistent throughout the web? Do companies not realize it's a horrible experience or do they just not care? I mean, you see it on major players, Youtube, Twitch, Amazon, Walmart. A lot of cases it seems underhanded like they're trying to get you to click something you didn't intend to but why the heck is this the internet we have today?
I've always felt a bit betrayed by the fact that Firefox lets web sites hijack the scroll bar in the first place. Why!? What happened to being the "user agent"?
The user agent developers realised that, over time, the relative number of users who bother to customise their user agent has been outgrown by the number who won't, to such a degree that the former group can now be considered a rounding error.
In other words, people in general just don't care. And therefore, neither will the software.
The point of these transitions is to show the user an object's trail of movement, so the user isn't confused by the sudden change in position; but most people use them mindlessly in UIs, making stuff feel unresponsive. For example, lerping on scrolling is absolutely useless: because scrolling is directly triggered by user input, users understand perfectly that any Y movement of the viewport is caused by their input, and the lerp just leaves an impression of lagginess or even confusion. I'd argue UI rarely needs transitions; they're mostly useful in games and simulations.
The input on Fortnite's aiming system has two response curve settings Exponential and Linear. For some reason the game defaults to Exponential. Until I found that setting, I was CONSTANTLY fighting with the inputs. For years I'd go to aim and my reticle would eventually end up where I wanted it, but too late so I'd pull back to try to aim it where the person moved and repeat.
Changing it to linear (I'm assuming it's 1:1 with my input now, it feels that way) was a game changer! I don't understand why companies seem so bad at the most basic things. Just let me input, interpolation just adds lag and unresponsiveness.
My bet is the exponential movement is to avoid people getting motion sick. Jittery movement that's 1:1 with (say) mouse movement can, for some people, be the equivalent of being in a drink shaker and shifting around way too much.
Of course having nice dials and ways of configuring that is best overall, because everyone's different and there's also motion sickness to be had when things don't align with your input!
A canonical lerp is a linear interpolation. What this post describes is repeated linear interpolation towards a target with a fixed interpolation ratio, i.e. exponential averaging aka a 1 pole lowpass filter.
A more serious treatment of the subject would've touched upon this. It would also not call this "lerp" because that name is already taken for the general form.
For example, you can fix the laggy feel of a single exponential average by repeating it. This creates a two pole lowpass filter with a steeper transient.
Z transforms from DSP provide a more formal treatment of the subject.
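A sketch of that cascade in C (hypothetical struct and names, assuming a fixed per-sample coefficient): each stage is a one-pole lowpass, i.e. exactly the "move a fixed fraction toward the input" update, and feeding one into another gives the two-pole response.

```c
/* One-pole lowpass (exponential average): the state moves a fixed
 * fraction `alpha` toward its input each sample. Cascading two of them
 * gives a two-pole response with a steeper transient and less of the
 * laggy single-pole tail. */
typedef struct {
    float s1;     /* state of the first pole */
    float s2;     /* state of the second pole */
    float alpha;  /* smoothing coefficient in (0, 1] */
} two_pole;

float two_pole_update(two_pole *f, float input)
{
    f->s1 += f->alpha * (input - f->s1);  /* first exponential average  */
    f->s2 += f->alpha * (f->s1 - f->s2);  /* second, fed by the first   */
    return f->s2;
}
```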
Take this as a lesson - if you ever want criticism on HN, post something about messing with animations on a page. Scrolljacking is a cardinal sin around these parts.
On another note - is it true that this would mean the animation frame function will run infinitely (even though the render will not re-render under a certain threshold)?
The feedback to this article seems hyperbolically vicious. But frontend web dev always feels like a field with a lot of very sharp opinions about things.
In some contexts, yes using Lerp to go half way to a target can lead to running forever which can cause timing bugs. If you lerp halfway to a specific point each frame and trigger an event when you reach that point then it runs infinitely. In practice it just runs until the rounding error goes to zero, but the timing for this can be weird and it adds an unwanted delay. I once spent ~2 days changing all the animations in a program to switch from using this type of smoothing to using lerp on an animation curve with a definite time duration. Exponential filters are good for smoothing a continuous parameter, but they should not be used on a bounded interval where the end point matters.
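The duration-based alternative described here can be sketched in C (illustrative names, assuming a simple quadratic ease-out curve): progress runs from 0 to 1 over a fixed duration, so the animation is guaranteed to land exactly on the end point at a definite time.

```c
/* Time-bounded animation: compute progress in [0,1] from elapsed time
 * and a fixed duration, shape it with an easing curve, then lerp.
 * Unlike per-frame exponential smoothing, this lands exactly on `b`
 * once elapsed >= duration, so end-of-animation events fire on time. */
float lerp(float a, float b, float t) { return (1.0f - t) * a + t * b; }

float ease_out_quad(float t) { return 1.0f - (1.0f - t) * (1.0f - t); }

float animate(float a, float b, float elapsed, float duration)
{
    float t = elapsed / duration;
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;   /* clamp: past the duration, stay at b */
    return lerp(a, b, ease_out_quad(t));
}
```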
This is not linear interpolation or a PD controller. This is an exponentially weighted moving average (AKA, a Kalman filter where your model is that the data is constant).
I was working on a problem with a friend, where I implemented a solution as a Kalman filter, with the models "TBD", just leaving them as constant. He implemented it in parallel as EWMA, and it was a bit of a shock to me when the behavior was identical. I did a little reading after, and there seems to be a closer relationship between the two filters than I previously had thought:
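One way to see the relationship, sketched in C (made-up noise parameters, not the actual problem above): a scalar Kalman filter whose model says the state is constant corrects with x += k * (z - x), which is exactly the EWMA shape; once the gain k settles to a steady-state value, the two filters are identical.

```c
/* Scalar Kalman filter with a constant-state model: the predict step
 * leaves the estimate unchanged (the state is assumed constant) and
 * only inflates the variance by the process noise q; the update step
 * blends in the measurement with gain k. That correction,
 * x += k * (z - x), is an EWMA step with alpha = k. */
typedef struct {
    float x;  /* state estimate */
    float p;  /* estimate variance */
    float q;  /* process noise */
    float r;  /* measurement noise */
} kalman1d;

float kalman_update(kalman1d *f, float z)
{
    f->p += f->q;                    /* predict: x unchanged, variance grows */
    float k = f->p / (f->p + f->r);  /* Kalman gain */
    f->x += k * (z - f->x);          /* EWMA-shaped correction */
    f->p *= (1.0f - k);
    return f->x;
}
```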
Not trying to be controversial, but I'm curious why you make the claim this is not a PD controller; I thought this is basically a canonical example of one.
Could you shed some light on the specific reason this does not qualify as a PD controller, and maybe what you'd change to make it into one?
A PD controller has a proportional term and a derivative term, and independent parameters for each (per axis).
This has one error term, and one parameter. If you think of the position as the variable being controlled, then it's just a "D" controller. If you think of the velocity as the variable being controlled, then it's just a "P" controller. But unless I'm experiencing some massive confusion, you can't think of it as a PD controller.
Edit: I think I can see why you might think this is a PD controller. You can see that there's no overshoot or ringing, so you think to yourself that there must be a damping term. But the difference is that there is no plant process here! We can just decrease the error. That's why you will never get overshoot or ringing. In summary, it's a one pole filter on an error function, not a controller.
I was thinking more in terms of behavioral characteristics, as your edit suggests.
In my mind, a strictly 'P' controller should produce an oscillation, which the example did not. I then surmised it 'hides' the 'D' part of the equation by modulating the error down each step. Strictly speaking, it seems like your summary is more accurate, and I'm still not quite sure if my way of looking at it is skewed, or completely broken.
If we're going to get technical: lerp just linearly interpolates two values, it doesn't even have anything to do with "positions" or "time" or anything else meaningful.
Call lerp on a linear interval, get a linear transition. Call it on a non-linear interval, get a non-linear transition. Both computed using linear interpolation. It's not lerp affecting the transition, it's how and when you call it.
So what you're saying is we can transform lerp, or linear interpolation, into an object for a framework. An abstract object that includes, namely, parameters or defining constraints like sampling rate and two singular values, for computing the realizable relationship among those three input values. What can we model with an understanding of a formalized lerp? With the caveat that it's best to model phenomena with lots of multidimensional data associated with observations of them. We're gonna need a bigger, more encompassing framework for understanding.
No, we don't, lerp is just lerp. If you want to do more complex things, go do more complex things, but that doesn't change what lerp is, and it doesn't turn it into "an object in a framework".
Please don't. This only looks pleasant on the first time you look at it. Now if you need to do hundreds of interactions per day, you want ugly, sharp and precise. Imagine if my text editor decided to "lerp" everything. I'd go insane.
Lerp is life. I built my whole game design toolkit around my meditations on Lerp.
The fundamental truth is that a float can disguise itself readily as a bool, but the opposite is not true. A bool cannot pull off the float's mustache.
A digraph(*) of candy coated floats can simulate the universe, forward and backwards. With rewind and seeking to any point.
Neat, one of the best tricks in the creadev toolbox! ;)
Shameless plug: https://github.com/titoasty/zmooth
I just published a JS library to easily make smooth interpolations, just by setting a "to" value (after doing it by hand for years)
I hope you'll find it useful!
Please don't make it easier for people to lerp scrolling and other things that shouldn't be lerped. It's annoying and unresponsive feeling when overused.
Well since it's a general library others can use it how they see fit. Discouraging someone for a small personal dislike like this is pretty bad form imo.
Check the comments; it's not a small personal dislike, it seems pretty ubiquitous. Simply trying to make sure it's well known that there seems to be a consensus that lerping can be used incorrectly to the detriment of user experience. It'd be nice if libraries were on board with that too.
And, to be fair, self promotion in an HN comment thread about a topic is bad form too. I know you're not the OC but the point seems fair.
So you're implying I am responsible for the "bad" things people will make with this (unknown) library.
That I literally use for my everyday job for functionalities clients ask for.
I stated that I use it to make my life easier and so I just shared it.
"things that shouldn't be lerped" are a subjective/UX responsibility. As a creadev I try to propose good practices but I don't often have the final word.
(IMHO "lerp" scrolling is a truly horrible experience)
Yeah, the "Le" in Lerp stands for Linear... Interpolation. Advancing some fixed portion every frame is exactly not linear.
Also, using a fixed portion every frame means you are now framerate-dependent: slow computers will take longer to catch up than a fast computer at 240Hz or whatever.
This is a great way of fucking up the user experience and throwing the low performance in the face of the users, instead of trying to feel responsive.
I don't like this, personally. It increases my perception of "lag", and I prefer things on my computer not to lag. It might be nice for animation outside my control, but it should not affect my usage of any type of UI (be it touch or pointing device).
Oh these rubberbandy effects are just awful (at least in the two examples of scrolling and circle following the mouse) - they break the immediacy of the feedback mechanism between my actions and their effect
To be fair, it does go all the way to a PD controller and covers the basics along the way. I can see that as a quick talk. Also, the book in slide 2 is from the speaker himself, so the presentation must have been promotional as well.
Since I'm not very bright, I tried first example (ball following your mouse) and thought that's cool and very responsive and smooth animation! Then I tried Lerp example...
> I acknowledge the Gubbi Gubbi people, the Traditional Owners of the land and waterways where I live. I would like to pay my respects to Elders past, present and emerging.
What on earth has this got to do with linear interpolation? Odd punchline to a decent article.
"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."
I think that line is part of Rach Smith's blog footer, which is common to all her posts. The article ended at "Thanks for reading." Anyway, nice brief article.
[1]: https://easings.net/#easeOutExpo