Wonderful bit of synchronicity: seeing this on the front page of HN right after I got back from being (deliberately) lost in the Fells reservation north of Cambridge. (In this case, of course, my algorithm was made simpler via access to a GPS oracle when it started to get dark. :) )
> It was included in a list of 12 problems described by the mathematician Scott W. Williams as "million buck problems" because he believed that the techniques involved in their resolution will be worth at least a million dollars to mathematics.
Does that mean it would be worth that much via (perhaps indirect?) applications, or worth that much to mathematicians? What does it even mean to say that a non-applied mathematical theorem is “worth” some sum of money? (Absent a rich mathematician willing to pay a bounty, at least.)
> What does it even mean to say that a non-applied mathematical theorem is “worth” some sum of money?
The aggregate increase in lifetime earnings of the set of maths grad students who earn their Ph.D.s off of dissertations in the subfield built on the back of the knowledge gained from the solution to the problem?
I assumed it was a riff on the Millennium Prize problems (which do have a bounty of $1M each), as a way of saying "here are some other very interesting unsolved problems".
I'm confused: how is "walk straight ahead" not an obvious best solution? Its minimum performance is null; its average, half the diameter of any (compact) shape; its worst, the diameter. Is the problem implicitly about infinite distances, multiple dimensions, or fractals?
Consider a forest that is 1000km long but only 1km wide, it is possible that "walk straight ahead" has a worst case of 1000km. But the strategy of "walk 1.5km, turn 90 degrees, walk 1.5km" has a worst case of 3km and is guaranteed to get you out of the forest (and I'm sure there are better strategies).
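A quick numerical check of that claim (a sketch in Python; I'm modelling the 1 km-wide strip as the region |y| <= 0.5 and ignoring the 1000 km length, since only the width matters):

```python
import math

# Sketch: the bent path "walk 1.5, turn 90 degrees, walk 1.5" escapes a
# strip of width 1 from every start exactly when the span of its
# y-coordinates exceeds 1: otherwise the path could be slid vertically
# to fit entirely inside the strip.
def y_span(theta, leg=1.5):
    """Vertical extent of the bent path walked with initial heading theta."""
    y1 = leg * math.sin(theta)                     # end of the first leg
    y2 = y1 + leg * math.sin(theta + math.pi / 2)  # end of the second leg
    # The path is piecewise linear, so its extremes are at the vertices.
    return max(0.0, y1, y2) - min(0.0, y1, y2)

# Check every heading on a fine grid: the smallest span is about
# 1.5 / sqrt(2) ~ 1.06 km, still wider than the strip.
worst = min(y_span(2 * math.pi * k / 100_000) for k in range(100_000))
assert worst > 1.0
```

So whatever your unknown heading, some point of the bent path pokes out of the strip, for a total walk of at most 3 km.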
What happens for a 1 : 1.1 rectangle? It seems to me that that curve isn't optimal then. If so, what is the ratio for a rectangle where that curve becomes optimal?
That paper shows the value of visualization... I followed the Bellman Forest Problem link to the Wikipedia page on the Moser's Worm Problem[1] and it didn't do anything for me.
In Ward's paper, he summarized Moser's Worm Problem differently (p.7):
In 1966, Moser posed a related problem which roughly asks: “What's the best-shaped hammer for smashing one-inch worms?”
In particular, the problem is to find a path that minimises the worst case (i.e., the path that minimises the time to hit the forest border over all possible starting locations and orientations within the forest).
The shape and size of the forest is known. If you know that the forest is long and thin, you're better off walking in a curved path so that you're guaranteed to hit the "side" of the forest rather than walking straight and potentially continuing until you hit the "end".
What if the shape is the set of all finite curves arranged as a star and given equal length?
If you happen to start in the middle, there will always be a worst-case route for any arbitrary curve, and no way to pick between curves to find the best one.
Yes, and the solution for a circle of radius r is easy (going straight, you don't need to walk more than 2r; if you turn anywhere, you risk walking more than that).
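A sanity check of the 2r claim (a sketch, assuming the unit disk r = 1 and sampling random starts and headings):

```python
import math, random

random.seed(0)

# Sanity check for the unit disk (r = 1): walking straight for 2r = 2
# reaches the boundary from any interior start and heading, because no
# chord of the disk is longer than the diameter.
def exits(x, y, theta, length=2.0, steps=500):
    """True if a straight walk of the given length reaches the circle."""
    for i in range(1, steps + 1):
        t = length * i / steps
        px = x + t * math.cos(theta)
        py = y + t * math.sin(theta)
        if px * px + py * py >= 1.0 - 1e-9:  # at or beyond the boundary
            return True
    return False

# Rejection-sample interior starting points with random headings.
starts = []
while len(starts) < 2_000:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y < 1.0:
        starts.append((x, y, random.uniform(0, 2 * math.pi)))

assert all(exits(x, y, th) for x, y, th in starts)
```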
Bellman put exact knowledge of the shape of the forest in the problem statement to make it nontrivial and therefore more difficult.
I'd discretize it into squares and consider all four orientations. You are then looking for a sequence of moves (forward, turn left, turn right), with the number of forwards minimized, that works on every square in all four orientations.
We have grammars over the alphabet {F,L,R} where "escape" accepts a word. We want the smallest word (only counting Forwards) accepted by all grammars.
Brute force enumeration works but is exponential.
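A sketch of what that brute force might look like (the 3x2 FOREST grid here is a made-up example, and for simplicity it minimises total word length rather than only counting Forwards):

```python
from itertools import product

# Hypothetical 3x2 "forest" of grid cells. A word over {F, L, R} escapes
# from a (cell, orientation) start if executing it ever leaves the set.
FOREST = {(x, y) for x in range(3) for y in range(2)}
DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # E, N, W, S

def escapes(word, start, facing):
    """True if executing the word from (start, facing) leaves FOREST."""
    (x, y), d = start, facing
    for move in word:
        if move == "L":
            d = (d + 1) % 4
        elif move == "R":
            d = (d - 1) % 4
        else:  # "F": step one cell in the facing direction
            x, y = x + DIRS[d][0], y + DIRS[d][1]
            if (x, y) not in FOREST:
                return True
    return False

def shortest_escape_word(max_len=8):
    # Exponential enumeration, shortest words first.
    for n in range(1, max_len + 1):
        for word in product("FLR", repeat=n):
            if all(escapes(word, c, d) for c in FOREST for d in range(4)):
                return "".join(word)
    return None

print(shortest_escape_word())  # "FFF" for this forest
```

For this little rectangle the answer is just "walk straight": three forward steps clear the longer side from any cell and orientation.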
It's faster to treat each place/orientation as a finite-state automaton: the states are the grid cells, labeled with transitions. Then recursively merge the automata in pairs, like mergesort. In the resulting big automaton you then want the shortest path to an escape state (only counting Forwards).
The second optimization I would apply is quadtrees: start with huge squares and subdivide them after solving at that level, down to your specified tolerance.
If you need even more precision, either use tiled hexagonal automata or subdivide into K orientations instead of the four compass orientations.
There is also probably some Presburger arithmetic formulation, treating positions as vectors in 2D, that you could whack with an SMT solver? I'd have to sleep on that. It might be easier to code than all the finite-state-automaton merging.
Here's a nice version to find a solution for, the half plane:
You're 1 km away from the (straight) edge of a forest (that extends infinitely in the other directions). What's the optimal path for you to walk to get out? (with the best worst case, ie the shortest path that is guaranteed to touch the edge.)
This was an interview question at investment banks back when brain teasers were still in vogue. There is a fairly evident solution, and then two improvements can be made.
EDIT to add: The solution was described by Isbell in 1957, and proved optimal by Joris in 1980. It's contained in the paper below, but don't deprive yourself of the pleasure of finding it yourself (and its length, if you're so inclined... :-)
Just to be sure, "path" must be understood as "trajectory" here? If the hiker doesn't know the starting position, then it cannot be a "path" in the sense of "line between point A and point B", since there is no point A or B.
So the solution would be something like: "go ahead 10 meters, then turn 45° right, then go ahead 50 meters, then take a Bézier curve left with such parameters, etc"
> The best path is taken to be the one that minimizes the worst-case distance to travel before reaching the edge of the forest.
Again, just to be sure, the "worst-case distance to travel" is the longest distance achieved by the "trajectory" when you apply it starting from any point of the forest?
So basically (assuming you walk at constant speed):
1. Determine a trajectory to follow that guarantees you get out of the forest within N seconds (easy).
2. What is the smallest possible N, and the associated trajectory, for which #1 holds? (hard)
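Step 2 can be made concrete for a toy case (a sketch assuming the forest is the unit disk and the trajectory is "walk straight"; at unit speed, N equals path length):

```python
import math

# Sketch of step 2 for a toy case: the forest is the unit disk and the
# trajectory is "walk straight". The worst-case exit distance over all
# starts and headings approaches the diameter, so the smallest N for
# this particular trajectory is 2.
def exit_distance(x, y, theta):
    """Distance along a straight ray from interior point (x, y) to the circle."""
    dx, dy = math.cos(theta), math.sin(theta)
    b = x * dx + y * dy
    c = x * x + y * y - 1.0
    return -b + math.sqrt(b * b - c)  # positive root of |p + t*d| = 1

worst = max(
    exit_distance(r, 0.0, 2 * math.pi * k / 720)
    for r in (0.0, 0.5, 0.9, 0.999)  # sample starts along one radius
    for k in range(720)              # sample headings
)
assert 1.99 < worst <= 2.0
```

The worst case is a start just inside the boundary, heading through the centre. The hard part of the problem is minimising that worst case over all trajectories, not just this one.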
This would be somewhat easier if the forest had 'holes' in it, right? But then it wouldn't be much of a (convex) forest, would it.. (I'm assuming the forest totally covers the area it occupies.)
It would be interesting to find the computational complexity of escape as a function of the geometry (a set of n vertices)
I'm starting to think that any sort of general solution would have to be extremely complex...
The optimum path would be something like a function of itself and the shape of the forest. That is, there is probably a general solution for the INITIAL 'optimum' path based on the forest shape, but then your optimum path changes at every nth 'step': as long as you haven't found the border of the forest yet, you keep learning a bit more (at least probabilistically) about where you could be in the forest...
I think that makes sense, but I have no idea how a solution method like that could be presented analytically.
The solution for each forest is a curve with a 'start' and an 'end' point.
We minimize the length of the curve for that forest.
Constraint: if the starting point of the curve is placed anywhere in that specific forest, then some point of the curve must lie outside the forest, regardless of the orientation of the curve.
That way we avoid thinking about the path as a function of itself.
IIRC in practice you can exploit a regularity that's common in the real world, which is that "people live near flowing water". So, just identify the downhill direction and follow it to a stream/river, which will guide you to people.
A Russian, a German and Bellman were bragging about their football skills.
- I once kicked a football over a flag pole, the Russian said
- What did you get for that?, the others asked
- Bronze.
- Well, I once kicked a football over a 4 story building, the German said
- What did you get for that feat?, the others asked.
- Silver.
Bellman was not so easily trumped, though
- I once kicked a brick over a skyscraper, he said.
- Wow! What did you get?! the others asked
- A broken foot.
First I thought I would formulate the problem as finding a path that at every step minimizes the number of possible starting positions consistent with the distance travelled so far.
Then I thought about an adversarial forest designer who would design a forest where following the optimal path for lengths smaller than X produces a solution that is not optimal overall. What kind of forest would that be? Does one exist?
I was a bit surprised it attempts to optimize the worst case, not the average case. I feel like the average case is a more practical problem, and one which likely has some pretty nice results from thinking of the map as a probability distribution. Simply walk in the direction which narrows down your possible location the most.
I disagree - I think if you described this to a layman they would always be more interested in optimising for the worst case. For example, if I had an algorithm that on average will get me out in 1 mile but in the worst case might take 100 miles, that's not very useful when I'm trying to decide how many supplies to take.
The concept of an arithmetic mean is quite artificial - there's other ways you could define "average" such as the median.
Also, in my experience, optimising for the average case is often the easier problem to solve.
> The concept of an arithmetic mean is quite artificial - there's other ways you could define "average" such as the median.
I'd give some pushback on that. It is the only linear mean, and linearity matters.
More importantly though, it corresponds to the Expected Value. That is, the average of n samples of a distribution f converges to E[f]. In fact, I believe the arithmetic mean is the best unbiased estimator for the expected value of a distribution.
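A quick illustration of that convergence (a sketch using an exponential distribution, for which E[X] = 1):

```python
import random, statistics

random.seed(1)

# Illustration: the arithmetic mean of n i.i.d. samples converges to the
# expected value. For an exponential distribution with rate 1, E[X] = 1.
mean = statistics.fmean(random.expovariate(1.0) for _ in range(200_000))

# Standard error here is 1/sqrt(200_000) ~ 0.0022, so the sample mean
# lands well within 0.02 of E[X] = 1.
assert abs(mean - 1.0) < 0.02
```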
Now, expected values matter because they are the basis of robust decision-making.
You can consider the mean, the median, the mode, and the other centrality measures as points that minimise some sort of average deviation. Depending on which norm you use to measure your deviation, you get a different centrality measure:
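Concretely, the mean minimises the squared (L_2) deviation and the median the absolute (L_1) deviation; a sketch checking this numerically on a skewed sample:

```python
import random, statistics

random.seed(2)

# Skewed sample: exponential with rate 1 (mean ~ 1, median ~ ln 2).
data = [random.expovariate(1.0) for _ in range(1001)]

def total_dev(c, p):
    """Total deviation of the sample from centre c under the L_p 'norm'."""
    return sum(abs(x - c) ** p for x in data)

# Scan candidate centres on a grid: the L2 deviation is minimised near
# the mean, the L1 deviation near the median.
grid = [i / 100 for i in range(0, 501)]  # candidate centres in [0, 5]
best_l2 = min(grid, key=lambda c: total_dev(c, 2))
best_l1 = min(grid, key=lambda c: total_dev(c, 1))

assert abs(best_l2 - statistics.fmean(data)) < 0.01
assert abs(best_l1 - statistics.median(data)) < 0.01
```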
Among these norms, the L_2 norm again stands out: it is the standard measure of distance, and here it matters because it is invariant under rotation. (Rotations form the special orthogonal group: the orthogonal matrices with determinant 1. The special linear group, all matrices of determinant 1, is strictly larger; it also contains shears, which do not preserve length.)
I wonder where exactly the connection between 'rotation invariance' and 'estimate of the expected value' comes from.
What is the practical benefit of betting you landed in the worst spot? When computer scientists talk about Quicksort, we regularly talk about it being an O(n log n) operation. But by this logic, we should be treating it as O(n^2). That doesn't make sense, because in practical applications we generally deal with random inputs, not adversarial inputs.
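That Quicksort gap is easy to demonstrate (a sketch with a naive first-element-pivot quicksort; the comparison counting is simplified to one per partitioned element):

```python
import random

random.seed(3)

def quicksort_comparisons(a):
    """Comparison count of a naive quicksort with first-element pivot."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n = 500
random_cost = quicksort_comparisons([random.random() for _ in range(n)])
sorted_cost = quicksort_comparisons(list(range(n)))  # adversarial for this pivot

assert random_cost < 15_000             # roughly 2 n ln n ~ 6,200
assert sorted_cost == n * (n - 1) // 2  # 124,750: the O(n^2) worst case
```

An already-sorted input is the adversarial case for this pivot rule, yet in practice we still quote the O(n log n) average.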
I'm not convinced what a lay-person is interested in solving matters. We're talking about Bellman, who is a mathematician and well known for attacking problems relevant to computer scientists. A lay person would just bring a compass and call it good.
Plus, it would depend on the probability measure over your initial position and orientation. It might seem obvious that you'd want it uniform over location and orientation, but you could argue that a more realistic measure would be obtained by eg picking a point on the boundary, then picking a direction, then looking for the next exit in that direction, and picking a point uniformly on that line (or exponentially with rejection if you end up outside), etc.
Wouldn't the optimal strategy be to walk in an outward expanding spiral? Even if it's the longest walking solution for simple examples, it would beat all these "infinite distance in one direction" solutions. Of course, the rate of expansion is something else to think about.
Really? Even without knowing your orientation or location within the 2D shape? I doubt this solution is a general one. Apparently there's an abstract "bounty" of a million bucks [of value to be reaped by mathematics]. That likely means that solutions a grad student could execute within a day have already been tried.
>> Bellman's lost-in-a-forest problem is an unsolved minimization problem in geometry originated in 1955 by the American applied mathematician Richard E. Bellman[1].
As an aside, I've always found it funny how a person who studies (and, er, applies?) applied maths is an "applied" mathematician. That makes it sound like what is being applied is the mathematician, not her maths. I guess it's funny - haha to think of it that way, but it's also funny - strange that this is the most natural way to say the thing being said (that someone is a mathematician working in applied maths). In fact, I can't think of a different way to say the same thing at all, not in so few words.
I think we should first measure just how much more spherical cows can actually get when exposed to vacuum, in comparison to physicists, before proceeding any further.
"They are theoretically paid to do physics." can also mean that someone is neither paid nor doing physics, that they are being paid and doing physics, or it could be granting or refusing permission for someone to do physics, depending on context and emphasis.
I assumed it would be something ?x[person(x) & does.physics(x)] where '?' is like the existential operator, but doesn't guarantee existence in the current world of evaluation (but in at least some possible worlds).
The complexity of this problem is absurd. If the hiker knows the terrain precisely, then he is capable of knowing where he is in that terrain. This is the basis of terrain association.