gtr32x's comments | Hacker News

The embedded mini-app ecosystem is a real model in China, with WeChat being the most popular platform. Douyin also has a good share of it. You even see such an ecosystem in payment apps like Alipay. I'm also certain there are special deals with Apple structured around this concept.


Ya - I think replication of the WeChat super-app has been attempted so many times in the West at this point, but maybe AGI coding solves this...?


The playbook works relatively well outside the Western hemisphere: Grab, Gojek, Gozem, and Yassir scaled well beyond their core services into fintech.


Pretty sure what they are saying is that there can be multiple "scans" per location.


Using a massive context window is akin to onboarding a new employee before every mundane task, while a trained employee takes on a new task easily with their existing context.

The trade-off is simply cost. In the LLM setting, that cost shows up as speed of execution and token cost.


I understand that the answer is no here, because this method is only suitable for the class of linear systems that corresponds to sparse matrices, whereas GPUs are optimized for general-purpose matrix multiplication. Unless it can be shown that there are economically high-usage scenarios for this class of problems (e.g., on the usage magnitude of bitcoin mining), the investment into this specific research does not seem warranted.


Completely ridiculous.

Firstly, faster solutions to fundamental problems can eventually lead to hardware that supports them.

Secondly, this is already happening for sparse matrix multiplication: the Nvidia A100 has some sparsity support, to allow making better use of pruned neural networks, for example.

Thirdly, sparse enough systems, even without the A100, can run faster on CPU than GPU. If you find yourself with one of these problems, you can just choose the correct piece of hardware for the job. Without a sparse algorithm, you are still stuck with the slower dense solution.

Fourthly, giant sparse systems do indeed arise constantly. Just to make one up, consider weather measurements. Each row is a set of measurements from a specific weather station, but there are thousands of stations: it's a sparse set of observations, with some nearby dependencies. Evolving the state in time will often involve solving a giant linear system. (See other comments in the thread about PDEs.)
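Purely as an illustration of the scales involved (made-up sizes, assuming SciPy is available), a sparse direct solve only touches the nonzeros, while the dense equivalent has to materialize the whole matrix before it can even start:

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    n = 20_000                            # e.g. one unknown per station/grid cell
    # Roughly 5 nonzeros per row plus a dominant diagonal to keep it well conditioned
    A = sparse.random(n, n, density=5 / n, format="csr")
    A = A + A.T + 10 * sparse.identity(n)
    b = np.random.rand(n)

    x = spsolve(A.tocsc(), b)             # cost scales with the number of nonzeros

    # Dense equivalent: np.linalg.solve(A.toarray(), b)
    # 20,000 x 20,000 float64 is ~3.2 GB of memory before any arithmetic happens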

It is absolutely worthwhile research, regardless of how applicable it is to fscking bitcoin.


If "free will" is only a subjective feeling and not an objective state


I believe the idea is that not all entities and institutions are eligible to borrow via the Fed discount window; e.g., there is the primary credit program (primarily banks) and the secondary credit program (for some other low-risk entities). Thus the repo market serves this segment of the market for liquidity.


Oh I see. I assumed the rates had spiked in the federal funds market, which also uses repos.


I'm curious how true this is. Don't they have lock-up periods? I would love to get educated here if that's not the case or there are exceptions.


The lockup period applies to existing shares held by employees, founders, and prior investors. The new shares that are issued and sold at the IPO offering price can be traded immediately, and that is where the volume comes from.

In order to be a buyer in an IPO you need to have a tremendous amount of net worth, think tens of billions of dollars, you need to have a relationship with a bank, and you need to subscribe to all of the IPOs on the calendar, not just cherry-pick the ones that you want.

The entire IPO process is really a very limited marketplace open to a very select few buyers, and these are typically very large endowments, mutual funds, and so forth.

In order to assure the company that they have buyers, they promise a return to those buyers in the form of the price popping immediately after the IPO; otherwise the IPO pricing loses its allure.

In a Dutch auction, typically new shares aren't created, and instead existing investors sell shares. This means that the company doesn't get any of that cash on its balance sheet. This is done when the company is profitable, or has enough cash reserves to become profitable in the near future, and the investors then want to capture all of that pop by offering those shares directly.

Now, a regular IPO is also okay for investors, because while the company gets a bit less cash on its balance sheet, the investors' shares are valued immediately on the public market and have the benefit of the pop, the idea being that they will retain this higher value past the six-month lockup.

However, there will typically already be two sets of quarterly results in that window, during which Wall Street will continue to evaluate the stock. If you can hit your projections you will be in good shape; however, if there are any misses, as was the case with Snap, where all of that exuberance was tied to impossible numbers, the reset can be quite harsh.

And in public markets, that reset is instantaneous, because as soon as the news hits, the market cap is immediately affected as shares are traded on it.


I think you are overselling how exclusive IPOs are... There are plenty of investors 'only' worth millions, not tens of billions (??? that's like 100 individuals worldwide...), who get offered IPOs by large brokerages as a benefit.


> In a Dutch auction, typically new shares aren’t created, and instead existing investors sell shares. This means that the company doesn’t get any of that cash on it’s balance sheets. This is done when the company is profitable, or has enough cash reserves to become profitable in the near future and the investors then want to get all of that pop by offering those shares directly.

This is wrong. Typically it's still the company that sells shares. In fact, Dutch auctions vs. typical IPOs are orthogonal to who sells the shares.

In both cases, the vast majority of the time it's the company selling shares from its float.


I think GP was thinking of direct listing, like Spotify's, in which the company does not issue new shares and instead the stock is listed on a market and insiders put up their own sell orders to provide initial volume to the market. In this case, there is no lockup and founders, employees, and other insiders can get immediate liquidity.


No, when you buy shares in an IPO they are yours immediately. Otherwise where would the liquidity come from?


I've always been confused about this: does that mean the 737 MAX has a high chance of getting faulty readings from its AoA sensors? Or is that pretty much the industry standard right now? This has always seemed like the actual culprit.


Sensors break. They get gunk on them. They ice over.

Generally a sensor with such a critical failure mode would be triple-redundant - if one fails, the discrepancy between sensors is flagged and the aircraft runs on the other two until the broken one is fixed. In this case, the aircraft had two sensors of the type (Angle-of-Attack), but MCAS was only listening to one of them.
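As a rough sketch of what that triple-redundant voting tends to look like in software (the names, the 5-degree tolerance, and the readings are made up for illustration; this is not any manufacturer's actual logic):

    def vote_aoa(readings, tolerance_deg=5.0):
        """Pick the median of three AoA readings and flag any sensor that disagrees."""
        ordered = sorted(readings.items(), key=lambda kv: kv[1])
        _, median_value = ordered[1]                       # middle of three
        faulty = [name for name, value in readings.items()
                  if abs(value - median_value) > tolerance_deg]
        return median_value, faulty

    selected, faulty = vote_aoa({"left": 4.1, "right": 4.3, "standby": 21.7})
    # selected == 4.3, faulty == ["standby"]: keep flying on the two that agree,
    # flag the third for maintenance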


How does this AoA sensor work? I'm guessing it has to sense the direction of the wind and then calculate the vector of the wings compared to that?

I've seen pictures of it ( https://aviation.stackexchange.com/questions/2317/how-does-a... ) and seems to be a simple mechanical component.

Why isn't this compounded with something that works by estimating the AoA from other factors? A few simple gravity based sensors would be able to tell the vector of the plane, and simply assuming that wind (airflow) is parallel to the ground would go a long way. Or is the vertical component of the local airflow so variable?


> Why isn't this compounded with something that works by estimating the AoA from other factors?

Cost, probably. The 777 and 787 still only use two alpha vanes, but they calculate a synthetic angle-of-attack value as well.


When in flight, accelerometers only tell you what forces are being exerted by the wings and the engines, not which way is down. Consider, for example, the way the drinks in a cup don't spill when a plane turns.


I know that an airplane can make a roll that keeps the fluid in the cup, but commercial planes in normal flight don't do stunts, so usually there's no centrifugal force to mimic gravity. When a big jet points its nose down or up, people and fluids feel it pretty much as if they were on the ground on a flat surface that is being tilted up or down at one edge.

Sure, this would probably be worth next to nothing in turbulence, but in a simple takeoff and landing (where MCAS is already active and depends on AoA) it might help.

And of course I might be completely ignorant of most of the relevant problems with using any kind of gyroscopic or acceleration based sensor.


I think general relativity rules out the possibility of making a gravity detector that can distinguish it from acceleration.


No, but cost probably does. There have been a number of satellites flown (GOCE, GRACE, SLATS) with the sort of equipment you'd need. With that equipment, simply measure the strength of gravity in multiple parts of the plane. This gets you altitude, just as GPS or air pressure would, and then you can determine the angle of the plane.

An edit in response to the follow-up mentioning Einstein, since HN is throttling my replies:

Yes, yes... and it doesn't matter for this purpose, because you can measure gravity at multiple points within the aircraft and because gravity falls off with distance.

https://en.wikipedia.org/wiki/Inverse-square_law#Gravitation

We have built equipment sensitive enough to measure this difference and we have flown it in satellites.


"Einstein’s ground-breaking realization (which he called “the happiest thought of my life”) was that gravity is in reality not a force at all, but is indistinguishable from, and in fact the same thing as, acceleration, an idea he called the “principle of equivalence”.

https://www.physicsoftheuniverse.com/topics_relativity_gravi...


Thanks for the elaboration. Could you help me understand one more thing? When you say MCAS only listens to one, does that mean only during the time when one AoA sensor fails? Or does it always listen to one during normal operation?

Also, it does seem like Boeing dropped the ball by not building further redundancy here.


MCAS alternates between the left and right sensor each time the plane lands. With the Lion Air flight I think cycling the electrical power for diagnostic work caused MCAS to pick up the faulty sensor for two flights in a row.


> MCAS alternates between the left and right sensor each time the plane lands

This sounds like spectacularly bad design that manages to extract negative value from having two sensors. What is the logic behind this?


Pilots and first officers tend to switch who has flight control on each leg of their flights (which is the Pilot Flying vs Pilot Assisting), and the MCAS system uses the AoA vane associated with the side of the cockpit that currently has flight control.

There is no good reason for only listening to one sensor.

There is a sort of good reason for having a split between pilot/copilot side: the instruments are redundant (both physically and electrically), so in the event of malfunction you can failover to the other side.


That actually sounds awful, sorry for my naivety if this is just industry standard. But for such a mission-critical piece to have no redundancy built over it is just poor, especially since it's prone to failure, being situated on the outside of the plane.

It just seems that this is some terrible engineering on Boeing's end, from not fully understanding how critical the situation is here.

Generally two failures: 1. a lack of redundancy in a mission-critical sensor, and 2. a blind trust in MCAS's priority over pilots.


> a lack of redundancy in a mission-critical sensor

There is redundancy in the sensors, but the sensors are not being used in a redundant manner. There are whispers that the 767 fuel tanker (KC-46/KC-767) has a system similar to MCAS that will look at both alpha vanes for disagreement, which is a bit damning to say the least.
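For what it's worth, even with only two vanes a disagreement check is simple. A hypothetical sketch (the threshold is made up; this is not the actual KC-46 or MAX logic):

    def mcas_aoa_input(left_aoa_deg, right_aoa_deg, max_disagreement_deg=5.5):
        """With two sensors you can't tell which one is lying, but you can refuse
        to act on either when they disagree: return None to inhibit automatic
        trim and alert the crew, otherwise use the average of the two vanes."""
        if abs(left_aoa_deg - right_aoa_deg) > max_disagreement_deg:
            return None
        return (left_aoa_deg + right_aoa_deg) / 2.0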

> a blind trust in MCAS's priority over pilots

The entire purpose of MCAS is to engage only when the pilot is flying to prevent the pilot from doing something dangerous. Previous generations of 737 had the same problem but the MAX is more delicate and compounds it with nacelles that generate lift.


Part of the problem was that MCAS was originally designed with very little control authority, and so wasn't considered safety-critical. However, during testing they realized they needed to up the gain, and did a pretty major retuning without reexamining their safety assumptions.

Plus the bug with the resets on the limiter.


I believe the MCAS uses the pilot-side sensor.


Boeing dropped the ball in many, many ways.


The lack of redundancy by default, coupled with extreme profit-seeking by Boeing for the 'upgrade', is inexcusable. TL;DR: they shit out a faulty product and hundreds died as a result.


> Or is that pretty much the industry standard right now

Half the industry uses three or four AoA sensors with majority voting.

The other half (Boeing) uses two.


Even redundancy doesn't really solve the issue. The sensors are out in the same environmental conditions, so they will likely all fail at once; for example, a bad pattern of water followed by cold can cause icing on all of them.

Instead, the sensors should detect failure, for example by using a motor to detect if the vane is stuck and cannot turn freely.

The flight controls should also be able to fly the plane even if all sensors of a certain type have failed. Angle of attack, for example, can be approximated with an accelerometer and gyro well enough to keep the plane in the air.


>Angle of attack for example can be approximated with an accelerometer and gyro

I'm prepared to be wrong but that sounds impossible to me. A steady-state descending stall is inertially indistinguishable from cruise.


You're right - you need to combine with altimeter or GPS, and for more accuracy you can also combine with wind direction forecasts or airspeed measurements.

The point is, there are lots of data sources, and with even a subset of them, it's possible to fly the plane to a safe landing.
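As a back-of-the-envelope version of that idea (illustrative only; a real system would run a proper sensor-fusion filter): in steady, wings-level flight the angle of attack is roughly pitch attitude from the gyros minus the flight-path angle, which you can get from vertical speed (altimeter or GPS) and airspeed:

    import math

    def estimate_aoa_deg(pitch_deg, vertical_speed_mps, true_airspeed_mps):
        """Rough AoA estimate: alpha ~= theta - gamma, where theta is pitch
        attitude and gamma = asin(vertical_speed / true_airspeed) is the
        flight-path angle. Ignores wind, sideslip and bank angle, so it's
        only a sanity check against the vanes, not a replacement for them."""
        gamma_deg = math.degrees(math.asin(vertical_speed_mps / true_airspeed_mps))
        return pitch_deg - gamma_deg

    # 5 deg nose-up, climbing at 5 m/s at 120 m/s true airspeed:
    # gamma ~= 2.4 deg, so alpha ~= 2.6 deg
    print(estimate_aoa_deg(5.0, 5.0, 120.0))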


But then the motor could break, and so it goes. The more moving parts you have, the more failure modes you have.


If the motor breaks, the sensor has failed.

Failure isn't really the problem - it's silent failure which is deadly.


Reading both of the posts makes me believe that Sutton has stated a more global outlook on the progression of complexity than Brooks did, or that Brooks is simply trying to continue to encourage the current generation of AI research.

My naive take on each of their arguments, which are seemingly obvious but nonetheless profound:

Sutton: advancement in computation capacity > specifically devised methods

Brooks: building specific tools helps in solving the problem

You see, neither of them is wrong. However, what Brooks is arguing for is essentially: hey, we invented paper, but we have no computer yet, so let's make some lined paper and graph paper to increase our productivity, hooray! Then what Sutton is saying is: dude, show me how your method will continue to be productive when computers are invented.

I do also want to propose my takeaway from these pieces though. From Brooks I take that building tools/methods is essential to local optimization, and that tools/methods can be extended to fit new global advancements. And to Sutton's point, we are in a state of continual progression through the extension of the essence of Moore's Law.


Interestingly, I was just thinking about this the other day and hoping that someone here would have the answer. I'm really curious what the total bandwidth is between different continents globally, not accounting for intra-continent bandwidth.

