The point is not germane. What is germane is that a car that supposedly uses LIDAR and infrared, and presumably was approved by regulators on that basis, should have had no problem seeing the pedestrian (LIDAR and infrared are unaffected by darkness) and should have shown at least some indication of braking, but did not. This suggests that the car does not in fact use any of those fancy (non-visible-spectrum) detection methods. Alternatively, those methods were fooled by the bicycle and the detection was misclassified as erroneous.
My point is that the camera view we're seeing likely has nothing to do with the self-driving portion of the vehicle (a good hint is the interior view--useless for autonomous driving, but a common feature of dash cams).
The car has a LIDAR sensor mounted on the roof. It is supposed to continuously scan 360° of the environment. Since LIDAR is an active sensor (it emits light), the car should have seen the person and bicycle even in the dark. That it did not do so suggests the car does not evaluate LIDAR input, or it dismissed the object as erroneous data.
It has 64 lasers spread over about 27 degrees of vertical field of view - roughly 0.4 degrees per laser, from almost horizontal down to about 24 degrees below horizontal. Now take a look at where it is mounted on the car, and envision those beams fanning out and being swept in a cone around the vehicle as the unit spins.
Now - if you think about it - as the distance from the sensor increases, the beams spread farther apart. I'd be willing to bet that at about 200 feet from the car, very few of the beams would hit a person and reflect back. Also, take a look at the reflectance data in the spec. Not bad... but now imagine you are wearing a fuzzy black jacket on your top half. How much reflectance do you get then?
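To put a rough number on that, here's a back-of-the-envelope sketch in Python. The 64-laser / ~27-degree figures are from the public Velodyne HDL-64E spec; the range and target height are my own assumptions, and it ignores mounting height and beam aim, so treat it as a best case:

    import math

    # Rough numbers from the HDL-64E spec (assumptions, not Uber's actual config)
    NUM_LASERS = 64
    VFOV_DEG = 27.0
    DEG_PER_LASER = VFOV_DEG / (NUM_LASERS - 1)  # ~0.43 degrees between beams

    def vertical_beam_spacing(range_m):
        # Approximate vertical gap between adjacent beams at a given range.
        return range_m * math.tan(math.radians(DEG_PER_LASER))

    def beams_on_target(range_m, target_height_m=1.7):
        # Best-case count of beams that could intersect a person-sized target,
        # ignoring mounting height and where the beams are actually aimed.
        return int(target_height_m / vertical_beam_spacing(range_m))

    range_m = 60.0  # roughly 200 feet
    print(f"beam spacing at {range_m} m: {vertical_beam_spacing(range_m):.2f} m")
    print(f"beams on a 1.7 m person: {beams_on_target(range_m)}")

At ~60 m the vertical gap between adjacent beams works out to nearly half a meter, so a standing person might intersect only three or four beams - and that's before any reflectance losses.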
What do you think the point cloud returned to the car is going to look like? Will it look like a human? Hard to say - but when you feed that into a classifier algorithm, there's a real possibility it won't identify the blob as a "human" worth slowing down for. Especially when you add some bags, a strange gait, plus the bicycle behind the person. All of this uncertainty adds up.
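To illustrate how a sparse blob can get thrown out before classification even happens, here's a toy filter of the kind many point-cloud pipelines use. This is pure speculation about Uber's stack - DBSCAN, the eps value, and the 30-point threshold are placeholders I picked for the example:

    import numpy as np
    from sklearn.cluster import DBSCAN

    MIN_POINTS = 30  # made-up threshold; real systems tune this, often per range

    def candidate_obstacles(points_xyz):
        # points_xyz: (N, 3) array of LIDAR returns in meters.
        # Cluster the raw returns, then drop anything too sparse to trust.
        labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points_xyz)
        clusters = []
        for label in set(labels) - {-1}:  # -1 is DBSCAN's "noise" bucket
            cluster = points_xyz[labels == label]
            if len(cluster) >= MIN_POINTS:  # sparse blobs never reach the classifier
                clusters.append(cluster)
        return clusters

A person in a low-reflectance jacket returning only a handful of points could easily fall under a threshold like that and be treated as noise.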
I am also willing to bet that only the LIDAR (plus the radar on the unit) was used for collision detection. Any cameras - even IR-based ones - were likely used only for lane keeping and following purposes, plus traffic sign identification, and maybe detecting the rear of the vehicle ahead. Ideally they would be used for person/animal identification and classification too - but given the camera sensor, whatever the IR sensor did or didn't see, and the weird lighting conditions, who knows how it would have classified that mix?
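If that guess about the architecture is right, the role split might look something like this - a purely speculative sketch, not Uber's actual design, and every name in it is mine:

    # Speculative mapping of sensors to subsystems -- all names are hypothetical.
    SENSOR_ROLES = {
        "lidar":      ["collision_detection"],
        "radar":      ["collision_detection"],
        "rgb_camera": ["lane_keeping", "vehicle_following", "sign_recognition"],
        "ir_camera":  ["lane_keeping"],  # pure guess at what, if anything, IR feeds
    }

    def feeds_braking(sensor):
        # Under this split, a pedestrian visible only to a camera would
        # never reach the braking logic at all.
        return "collision_detection" in SENSOR_ROLES.get(sensor, [])
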
Lots of variables here - lots of "ifs" too. All we can do is speculate, because we don't have the raw data. Uber would do well to release the entire raw dataset from all the sensors for the community to examine and learn from.
Finally - I am not an expert on any of this; my only "qualifications" on the subject are having taken and passed a couple of Udacity MOOCs - specifically the "Self-Driving Car Engineer Nanodegree" program (2016-2017) and their "CS373" course (2012). Both were very enlightening and educational, but can only really be considered an introduction to this kind of tech.