I'm confused by both this blog post, and the reception on HN. They... didn't actually train the model. This is an announcement of a plan! They don't actually know if it'll even work. They announced that they "trained over 50 million neural networks," but not that they've trained this neural network: the other networks appear to just have been things they were doing anyway (i.e. the "Visual Positioning System"). They tout huge parameter counts ("over 150 trillion"), but that appears to be the sum of the parameters of the 50 million models they've previously trained, which implies each model had an average of... 3MM parameters. Not exactly groundbreaking scale. You could train one on a single consumer GPU.
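As a quick sanity check on that arithmetic, using only the figures quoted in the post and assuming float32 weights:

```python
# Back-of-envelope using the announcement's own figures.
total_params = 150e12   # "over 150 trillion parameters" (aggregate)
num_models = 50e6       # "over 50 million neural networks"

params_per_model = total_params / num_models
print(f"{params_per_model:,.0f} parameters per model")            # 3,000,000

# Assuming 4 bytes per float32 weight, each model is roughly 12 MB,
# i.e. small enough to train on a single consumer GPU.
print(f"~{params_per_model * 4 / 1e6:.0f} MB of weights per model")
```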
This is a vision document, presumably intended to position Niantic as an AI company (and thus worthy of being showered with funding), instead of a mobile gaming company, mainly on the merit of the data they've collected rather than their prowess at training large models.
“Concepts of a plan” is often enough to make people think you know what you’re doing. I think most people, here included, got the impression that they had already succeeded.
And I get that; one thing that (I think) software developers especially have is high-level knowledge of many different subjects, to the point where IF they ever have to do something in practice, they'll know enough to figure it out. A T-shaped-people kind of thing.
They have never been a mobile game company and they have said as much themselves on many occasions. They're a data harvesting company. Guess now they're trying to figure out what to do with all of that data.
It looks pretty cool. I imagine it could be a game changer for wearable devices that want to use positioning for AR.
Intelligence gathering is another one. Being able to tell where someone is based on a picture is huge, and presumably not just limited to outdoors but indoors as well. Crazy stuff.
They have a "VPS" which extracts keypoints from an image and matches them against a 3d pointcloud. Using trigonometry you can work out the 3d position of the camera by matching the keypoints from the image to the keypoints in the point cloud.
What is different is that they are proposing to make a large ML model to do all of the matching, rather than having a database and some matching algorithm.
Will it work? Probably. Will it scale? I'm not that hopeful, but then I was wrong about LLMs.
Over the past five years, Niantic has focused on building our Visual Positioning System (VPS), which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse.
This is pretty cool, but as a pokehunter (Pokemon Go player) I feel I have been tricked into working to contribute training data so that they can profit off my labor. How? They consistently incentivize you to scan pokestops (physical locations) through "research tasks" and give you some useful items as rewards. The effort is usually much more significant than what you get in return, so I have stopped doing it. It's not very convenient to take a video around the object or location in question. If they release the model and weights, though, I will feel I contributed to the greater good.
Unsarcastically, a lot of people believe user data belongs to users, and that they should have a say in how it's used. Here, I think the point is that Niantic decided they could use the data this way and weren't transparent about it until it was already done. I'm sure I would be in the minority, but I would never have played - or never have done certain things like the research tasks - had I known I was training an AI model.
I'm sure the PoGo EULA that no one reads has blanket grants saying "you agree that we can do whatever we want," so I can't complain too hard, but I'm still disappointed I spent any time in that game.
> Nothing in our society operates in a way that might imply this.
I beg your pardon?
Consider just about any physical belonging — say, a book. When I buy a book, it belongs to me. When I read a book in my home, I expect it to be a private experience (nobody data-mining my eyeball movements, for example).
This applies to all sorts of things. Even electronic things — if I put some files on a USB stick I expect them to be "mine" and used as I please, not uploaded to the cloud behind my back, or similar.
And if we're just limiting ourselves to what we do in public (eg: collecting pokemon or whatever), it's still normal, I think, to interact relatively anonymously with the world. You don't expect people to remember you after meeting them once, for example.
In summary, I'd say that "things in our society" very much include people (and their tendency to forget or not care about you), and physical non-smart objects. Smart phones and devices that do track your every move and do remember everything are the exception, not the rule.
Before smartphones or the rise of the internet, your information was mined by credit agencies for use by banks, employers, and other lenders.
Credit card companies and banks sold your data to third parties for marketing purposes.
Payroll companies like ADP also shared your data with the credit agencies.
This is not a new phenomenon and has been the currency of a number of industries for a while.
The only thing that has changed is the types of data collected. Personally, I think these older forms of data collection are quite a bit more insidious than some of the data mining done by a game like Niantic for some ML model.
I have a lot more control over, and face less insidious consequences from, this newer kind of data collection: I can avoid the game or service if I like. There isn't much I can do to prevent a credit agency from collecting my data.
> This applies to all sorts of things. Even electronic things — if I put some files on a USB stick I expect them to be "mine" and used as I please, not uploaded to the cloud behind my back, or similar.
Every app you open on a Mac sends a "ping" to Apple's servers.
> I have done some preliminary tests: with a script (a small program) that standalone runs in 0.4 seconds, the extra network requests that Apple performs are taking that number to 6 seconds in average, and in some cases when my wifi is slow, 70 seconds.
I just do not believe that. It sounds like a bug in a beta release. I'm sure I would have noticed if every ls I run took 6 seconds, and I'm sure many others would have too. Heck, I've used a Mac with the network turned off and it sure doesn't just refuse to run everything.
> Consider just about any physical belonging — say, a book. When I buy a book, it belongs to me. When I read a book in my home, I expect it to be a private experience (nobody data-mining my eyeball movements, for example).
Perhaps this is just my own brain's degradation, but how far removed from society do you need to be to expect your purchases to not be sold to the highest bidder? This practice is certainly older than I am.
Forgive me if I cannot conceive of a consumer who has completely tuned out the last forty years of discourse about consumer protection. Hell, the credit bureaus themselves contradict the concept of consumer privacy.
> Perhaps this is just my own brain's degradation, but how far removed from society do you need to be to expect your purchases to not be sold to the highest bidder? This practice is certainly older than I am.
It depends quite a bit on how you make your purchases.
If your purchases are on a credit card, with a loyalty ("tracking") card or App(TM) involved in the purchase? They're absolutely being sold to... well, probably not the highest bidder, but "all bidders with a valid payment account on file."
If you make a habit of paying cash for things and not using Apps or loyalty cards, and don't have your pocket beacon blaring loudly away on a range of radio frequencies when you shop, I expect a lot less data sales. It's a bit of a transition if you're used to credit cards, but once you're used to it, it's not bad at all, and involves a lot less data collection. I don't mind if the local barista or bartender knows me and my preferences, but I do mind if their POS system is uploading that data continuously.
Perhaps my main objection is that you said "Nothing in our society X" rather than "many things in our society Y."
I was just providing some counter-examples to show that there's more than nothing at play, here.
Certainly there are oodles of examples of our data being sold behind our backs, even well before 40 years ago. But there are also oodles of examples of the opposite.
You find it strange that people want something different from the wild-west status quo (which is not the status quo everywhere, by the way), one they may not even fully understand or be informed enough to understand, including how it works or what the consequences are? Do you actually expect even a savvy user of this game to think, "oh, of course they would be using my labor to profit from this technology I don't understand, duh"? What a strange statement and world view.
Wanting something to be a certain way is very different from believing that it is. And yes, I would expect any moderately informed and technically savvy user to assume that the company is doing anything they possibly can to profit off of user data.
The media just buries people in bad examples, and they don't notice the rest of the world. If you read about someone running over five grannies but still don't follow that example yourself, you certainly can't say that "everyone is doing it".
Despite what success fantasies and other self-help garbage teach people, a lot of society — most of it, actually — does not work on greed. The fact that you can get by without thinking about it is itself a statement about the deep foundations under the shallow bling.
Off the top of my head I think GDPR in the EU might have something to say about this. I don't know if those protections exist anywhere else or not.
In the US, people get very upset about things like traffic cameras, and public surveillance in general. Those are usually data-for-punishment vs. data-for-profit (...maybe?), but people here resist things like data recorders in their cars to lower car insurance.
At least to me, being unhappy about Niantic's behavior here does not seem the least bit unusual.
> In the US, people get very upset about things like traffic cameras, and public surveillance in general.
People get upset about a lot of things in the US. In fact, for some unknown reason, we consider getting upset over things a form of political activity. However, there is no political party trying to court voters by advocating for dismantling the intelligence state.
> I can understand that people believe this, but why do they do? Nothing in our society operates in a way that might imply this.
Sure, but that disconnect between what people think and how things work is almost fully general over all subjects.
I've seen people (behave as if they) think translation is just the words, but that leads to "hydraulic ram" becoming "water sheep". People who want antibiotics for viral infections, or who refuse vaccines (covid and other) claiming they're "untested" or have "side effects" while promoting alternatives that both failed testing and have known side effects. I've seen people speak as if government taxation only exists because the guy in charge of taxes is, personally, greedy. I've heard anecdotes of people saying that you can get people to follow the rules by saying "first rule is to always follow the rules" and directly seen people talk as if banning something is sufficient to make it stop.
The idea that it's even possible to make a model like this from the user data is probably mind-blowing to a lot of people.
The naïve assumption most people seem to have is that computers do only what they, personally as end-users, tell them to do, and that they're as slow as the ad-riddled web front-end with needlessly slow transition animations placed there to keep user engagement high. The truth is that software primarily does what the operator of the service wants it to do, and that it's absolutely possible for a home PC[0] to hold and query a database of all 8 billion people on the planet and the two trillion or so personal relationships between them.
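A rough back-of-envelope for that claim (the record sizes here are assumptions, not measurements):

```python
# Rough storage estimate; per-record sizes are assumed, not measured.
people = 8e9                 # world population
relationships = 2e12         # claimed number of pairwise relationships

bytes_per_person = 200       # name, DOB, a few IDs and attributes
bytes_per_edge = 16          # two 8-byte person IDs

print(f"people table:   ~{people * bytes_per_person / 1e12:.1f} TB")   # ~1.6 TB
print(f"relationships:  ~{relationships * bytes_per_edge / 1e12:.1f} TB")  # ~32 TB
# A few tens of TB of spinning disk is within reach of an enthusiast's home machine.
```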
When GenAI images were new, some of the artist communities said "That content generated can reference hundreds, even thousands of pieces of work from other artists to create derivative images"[1], rather than millions of images, because the scale of computer performance is far beyond the comprehension of the average person. The fact that the average single image contributes so little to any given model that the model couldn't even reproduce its own filename is even further beyond comprehension.
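The same kind of arithmetic makes the image point concrete; using assumed round numbers for model and dataset size:

```python
# Assumed round numbers: ~1 billion parameters, ~2 billion training images.
params = 1e9
train_images = 2e9

params_per_image = params / train_images
bytes_per_image = params_per_image * 4   # float32
print(f"{params_per_image:.2f} parameters (~{bytes_per_image:.0f} bytes) per training image")
# ~0.5 parameters, i.e. about 2 bytes per image: not even enough to store
# a short filename, let alone the image itself.
```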
And so it is with stuff like this: what can be done, cannot be comprehended by the people who, theoretically, gave consent that their data be used in that way.
[0] Of course, these days most people don't have home PCs; they have a phone, perhaps a tablet, maybe a small low-performance media server if they're fancy, but what we here would think of as a PC is to all that as a Ferrari is to a Honda Civic.
>>>>> I have been tricked into working to contribute training data so that they can profit off my labor.
> Unsarcastically, a lot of people believe user data belongs to users, and that they should have a say in how it's used.
At some point this stops being a fair complaint, though. Most of the AI-related cases IMO are such.
To put it bluntly: expecting to be compensated for anything that can be framed as one's labor is a level of greed so extreme that even Scrooge McDuck would be ashamed of it. In fact, trying to capture all the value one generates is at the root of most, if not all, underhanded or downright immoral business practices in companies both large and small.
Society works best when people stop trying to capture all the value they generate. That surplus is what others can use to contribute to the whole, and then you can use some of their uncaptured value, and so on. That's how symbiotic relationships form; that's how ecosystems work.
> I'm sure I would be in the minority, but I would never have played - or never have done certain things like the research tasks - had I known I was training an AI model.
I have a feeling you wouldn't be in the minority here, at least not among people with any kind of view on this.
Still, with AI stuff, anyone's fair share is $0, because that's how much anyone's data is worth on the margin.
It's also deeply ironic that nobody cares when people's data is being used to screw them over directly - such as profiling or targeting ads; but the moment someone figures out how to monetize this data in a way that doesn't screw over the source, suddenly everyone is up in arms, because they aren't getting their "fair share".
Going by normal users' feelings: I'm sure that when I play my Switch, they won't sell my data. But when people use Google's services, that is the default assumption.
The normal business model for free-to-play games is that a small number of people pay a lot of money for cosmetics or convenience; this finances the game and is how the company makes its money. The free players then provide value by being there, making the game feel alive, and being someone the spenders can show off their cool items to.
That is how monetization for free-to-play games has worked for a very long time now. Changing that without letting people know up front is absolutely a betrayal of trust.
> They charge for gems and this model is well understood to make a fortune without selling user data at all
I don't understand what this has to do with the topic at hand. Are you suggesting that people can't conceive of the sale of their data because they can conceive of whales amortizing the cost of their video games? That seems contradictory in your estimation of people's ability to grasp the world.
"How did you imagine they were making money without pimping your data?"
I imagined they were making money in the big obvious way they make money!
I can conceive of them selling user data, but it's not their core business model, and they would operate basically the same if they couldn't sell user data. It was never some obvious thing that they would do this.
I'm not a fan of the way you moved the goal posts here. You argued that Niantic would obviously use user data to fund game operations. Then we see that they don't actually need to do that, and that the game could fund itself. Then you argue that well, we shouldn't assume that they wouldn't try to monetize user data, shame on us. I agree that those who know how tech companies operate should be extremely pessimistic as to how users are treated, but I don't think that pessimism has permeated the public consciousness to quite the level you think it has. Moreover, I don't think it's a failing on the part of the user to assume that a company would do something in their best interest. It's a failing of the company to treat users as commodities whose only value is to be sold.
But some numbers-pusher somewhere saw an opportunity to make even more money and post good quarterly numbers, patting themselves on the back for a job well done, without ever wasting a thought on any such unimportant thing as ethical implications...
Google actually has released weights for some of their models, but judging by how potentially valuable this model is, they likely will not allow Niantic to do the same here.
> Google actually has released weights for some of their models, but judging by how potentially valuable this model is, they likely will not allow Niantic to do the same here.
which is totally unfair, every niantic player should have access to all the stuff because they collectively made it
> which is totally unfair, every niantic player should have access to all the stuff because they collectively made it
I don't understand this perspective. While all players may have collectively made this model possible, no individual player could make a model like it based on their contributions alone.
Since no single player could replicate this outcome based on only their data, does it not imply that there's value created from collecting (and incentivizing collection of) the data, and subsequently processing it to create something?
It actually seems more unfair to demand the collective result for yourself, when your own individual input is itself insufficient to have created it in the first place.
I don't think producers of data are inherently entitled to all products produced from said data.
Is a farmer entitled to the entirety of your work output because you ate a vegetable grown on their farm?
“Is a farmer entitled to the entirety of your work output because you ate a vegetable grown on their farm?”
Bad analogy. I pay a farmer (directly or indirectly) for the vegetable. It’s a simple, understood transaction. These players were generally unaware that they were gathering data for Niantic in this way.
If data is crowdsourced it should belong to the crowd.
Niantic pays you for the data you collect, as well. It might pay you with in-game rewards, but if you accept those rewards, this is, as you put it, "a simple, understood transaction".
The farmers you buy the vegetables from are also generally unaware of how you use them, too!
I fail to see how you're differentiating the analogy from the original example.
Most of your analysis is flawed because the model is non-rivalrous so it could easily be given to every player.
Additionally, many people can contribute to make something greater that benefits everyone (see open source). So the argument of “you couldn’t have done this on your own” also doesn’t hold any water.
The only thing that protects niantic is just a shitty ToS like the rest of the games that nobody pays attention to. There is nothing fundamentally “right” about what they did.
> Most of your analysis is flawed because the model is non-rivalrous so it could easily be given to every player.
Sure, copying it is approximately free. But using it provides value, and sharing the model dilutes the value of its usage. The fact that it's free to copy doesn't mean it's free to share. The value of the copy that Niantic uses will be diluted by every copy they make and share with someone else.
> Additionally, many people can contribute to make something greater that benefits everyone (see open source). So the argument of “you couldn’t have done this on your own” also doesn’t hold any water.
Your second sentence does not logically follow from the first. In fact, your first sentence is an excellent example of the point I was making: many people contribute to open-source projects, and the value of the vast majority of those contributions on their own does not amount to the sum total value of the projects they've contributed to. This is what I meant by "your own individual input is itself insufficient to have created it in the first place". Sure, many people contribute to open-source projects to make them what they are, but in the vast majority of cases, any individual contributor on their own would be unable to create those same projects.
To rephrase your first sentence: the value of the whole is greater than the value of the parts. There is value in putting all the pieces together in the right way, and that value should rightfully be allocated to those who did the synthesis, not to those who contributed the parts.
Is a canvas-maker entitled to every painting produced on one of their canvasses? Without the canvas the painting would not exist--but merely producing the canvas does not make it into a painting. The value is added by the artist, not the canvas-maker--therefore the value for the produced art should mostly go to the artist, not the canvas-maker. The canvas maker is compensated for the value of the canvas itself (which isn't much), and is entitled to nothing beyond.
> The only thing that protects niantic is just a shitty ToS like the rest of the games that nobody pays attention to. There is nothing fundamentally “right” about what they did.
There's also nothing fundamentally wrong about it, either, which was my point. Well, my point was actually that it's even more shitty to demand the sum total of the output when you only contributed a tiny slice of the input.
You’re getting really confused here. Nobody is arguing about stuff being worth more than the sum of its parts. That’s obvious to everyone who has watched literally anything useful being constructed out of materials.
You using that as some kind of support for Niantic’s actions doesn’t make any sense.
> There's also nothing fundamentally wrong about it, either, which was my point.
What you’re ignoring is the reality of people getting angry when they contribute something under a premise and then it gets used for something else. When I contribute to a charity that is supposed to build water supply systems and they decide to build pipe bombs instead, I’m gonna be pretty pissed off.
> Well, my point was actually that it's even more shitty to demand the sum total of the output when you only contributed a tiny slice of the input.
The collective that produced literally all of the input can ask for the model and then easily copy it to each member. If a single person produced all of the input and then requested this, how much does your argument change? Because these scenarios are equivalent when the product isn’t rivalrous.
More generally, you’re still not grokking non-rivalrous goods. A good doesn’t become rivalrous just because artificially constraining it and selling access to it can make it profitable. This confusion has led you multiple times to compare this model to physical goods.
> You’re getting really confused here. Nobody is arguing about stuff being worth more than the sum of its parts. That’s obvious to everyone who has watched literally anything useful being constructed out of materials.
I really don't think I'm getting confused here. This is what you said: "Additionally, many people can contribute to make something greater that benefits everyone (see open source)". That sounds to me like "many people contribute to make a thing whose value is greater than the value of the inputs" aka "the whole is greater than the sum of its parts".
Regardless, your second sentence was still unsupported by that, because I can point to literally any open-source project and prove that no one contributor to that project could have created the project that exists today. Sure, there are projects where 80-90% of the project is written by one person, and even the rare case where an entire project is written by a single individual, but those are rare cases, and not the norm. The statement that "no one individual could recreate these projects on their own" is still accurate far more often than it's not. Finding a single counter-example doesn't prove the point, because your counterexample is the vast minority case.
We know for a fact in the case of Niantic's data gathering that no one individual could have made this model. There are many reasons, but the easiest to illustrate is the number of man-hours required to collect the input data.
> What you’re ignoring is the reality of people getting angry when they contribute something under a premise and then it gets used for something else. When I contribute to a charity that is supposed to build water supply systems and they decide to build pipe bombs instead, I’m gonna be pretty pissed off.
I'm not ignoring that reality, I'm just saying those people aren't justified in their anger. They can be angry all they want, but anger does not justify feeling entitled to something you really aren't entitled to. In the case of Niantic's data collection, they opted into this and agreed to collect the data on behalf of Niantic, without even asking what the data would be used for. When it turns out that the purpose is something that makes Niantic money (you know, to make up for the fact that you're playing their game for free), they really have no standing. To be clear, they're free to be angry and free to feel "cheated" in some way, but a) they haven't been cheated, and b) their ignorance is their fault and no one else's.
> The collective that produced literally all of the input can ask for the model and then easily copy it to each member. If a single person produced all of the input and then requested this, how much does your argument change? Because these scenarios are equivalent when the product isn’t rivalrous.
If a single person produced all the input and then requested it, I'd probably say they deserve a copy. However, no single individual can have produced all the input here, so the point is moot. There also is no "collective that produced literally all of the input", so that point is moot, as well. You would never be able to get every person's explicit consent to demand a copy of the model on behalf of "everyone", if not for the simple fact that the vast majority of those people simply don't give a shit. They'd never use the model or do anything constructive with it, so why bother with having a copy?
Neither of these examples are realistic, and so my argument doesn't change. I try to keep my arguments grounded in reality, not in hypotheticals.
And again, giving a copy to each member isn't free, even though copying it might be. I'll just quote myself again:
> Sure, copying it is approximately free. But using it provides value, and sharing the model dilutes the value of its usage. The fact that it's free to copy doesn't mean it's free to share. The value of the copy that Niantic uses will be diluted by every copy they make and share with someone else. [...] There is value in putting all the pieces together in the right way, and that value should rightfully be allocated to those who did the synthesis, not to those who contributed the parts.
-
> More generally, you’re still not grokking non-rivalrous goods. A good doesn’t become rivalrous just because artificially constraining it and selling access to it can make it profitable. This confusion has led you multiple times to compare this model to physical goods.
No, I grok non-rivalrous goods pretty well. I just think they're largely imaginary and only apply to a very small slice of non-physical goods. Niantic is building this model to make money from it. This means they believe the model will provide value to other users, who will pay them for the use of that model. Anyone else who obtains a copy of this model could use it in the same way, and obtain some of that market share for themselves. This means providing services built on this model is inherently rivalrous, which removes the entire basis of your argument. Even if this leads to lower prices for the end users (the ideal case), there is still direct competition (i.e. rivalry!) between all owners of the model.
People who think like this and want to profit off you with KPIs are why players should always maliciously comply with data grabs. Spend the 30 seconds activating the accelerometer and doing sweeps of your shoes and full-finger covers of the surroundings to get those poffins and rare candies. It's gross that lately they only want to give me 10 pokeballs instead.
If some small number maliciously comply like this, it will make the model better, not worse.
This is also wildly antisocial behavior, and if everyone behaved like this, the world would be a really shit place. I know many people have a genuine "fuck you, I got mine" attitude, but if everyone had it, the world would be infinitely worse off.
If you don't like the terms of the game, don't play it? Why does dislike of the terms merit what essentially amounts to cheating (under the spirit of the rules, if not the letter)? This attitude makes even less sense than the one I was originally critiquing.
What you say is fair, but if an individual's data doesn't matter, what happens when they ask to have their data deleted under GDPR?
Is there a way to demux their data out of existing models?
While your example isn't exactly coherent (I don't think GDPR would cover photos/videos taken by the user, unless maybe the user was in the photo/video?), presumably they could just train the model again without that user's data. I doubt the end result would be that much different
> Is a farmer entitled to the entirety of your work output because you ate a vegetable grown on their farm?
This is more like paying the farmhands.
If we're looking at my work output, eh, everyone that works on a copyrighted thing gets a personal license to it? That sounds like it would work out okay.
> I don't think producers of data are inherently entitled to all products produced from said data.
It depends on how directly the data is tied to the output. This seems pretty direct.
Niantic was clear about the product of the labor: In exchange for swiping the PokeStop, you'd get the rewards. No one was ever told they'd get more than that, and no one had any reasonable expectation that they'd get more.
Exactly! Everyone thought that the exchange was them doing something in the game and Niantic giving them rewards in the game, and no one had any reasonable expectation that Niantic would get more outside of the game. (After all, neither Blizzard nor Square gets anything when one completes quest objectives in their MMOs.)
So obviously, now that Niantic is getting things outside the game, it's reasonable for the people who did the work to ask for something from that.
> So obviously, now that Niantic is getting things outside the game, it's reasonable for the people who did the work to ask for something from that.
Absolutely not.
If you are compensated for doing something, you can’t suddenly come back for more 5 years later because it was used as part of something bigger which is now making money.
I have little sympathy for the players here. If you are voluntarily doing free work for worthless virtual things, you can’t come complaining when it dawns on you that it might have been dumb from the start (and to be fair maybe it wasn’t and they did it because it was fun which is completely ok).
I guess we could ban in-game shops and in-game rewards for real-world work on the grounds that they are somewhat predatory, but that would be a bit paternalistic.
Can you name any other agreement where it's considered reasonable to renegotiate the terms afterwards because you found out what the other party got was more profitable for them than you'd been aware of, through no misrepresentation on their part?
For people who've dealt with children a lot, sure. But making an exchange and then expecting a cut of the other side's profits on top of what you exchanged for is possibly the definition of unreasonable expectations.
> I don't think this is very difficult to sort out: people feel entitled to the products of their labor.
What labor, though? They took a few pictures and videos (hell, they probably still have a copy of them, so giving a copy to Niantic is essentially free), and were generally compensated for doing that (through in-game rewards, but compensated nonetheless).
The "labor" that transformed the many players' many bits of data was done by Niantic, and thus I would argue that Niantic is the rightful beneficiary of any value that could not be generated by any individual player. To my earlier point, every player could retain a copy of every photo/video they submitted to Niantic, and still be unable to produce this model from it.
> This is comparing apples and oranges: presumably the consumer didn't do anything to produce the vegetable. Hell if anything, under this analogy niantic would owe users a portion of their profits.
The players are also compensated for their submissions, are they not? It doesn't matter that it's not with "real money", in-game rewards are still compensation.
If you agree that a farmer is not entitled to any (much less all!) of your work output because they contributed to feeding you, you agree that the players are not entitled to the models produced by Niantic.
Maybe I'd accept the argument that a player might be entitled to the model generated by training on _only_ that player's data, but I think we'd agree that would be a pretty worthless model.
The value comes from the work Niantic put in to collate the data and build the model. Someone who contributed a tiny fragment of the training data isn't entitled to any of that added value (much less all of it, as the OP was seeming to demand), just like a farmer isn't entitled to any of your work output (much less all of it!) by contributing a fragment of your caloric intake.
They got to play the game for free, and I'm fairly sure what Google is doing here is within the terms and conditions that people agreed to.
(And I don't even mean only that it complies with the exact wording of the fine print that nobody reads anyway, but also that everyone expects the terms-and-conditions to say that the company owns all the data. So no surprises to anyone.)
Welcome to the modern internet. While you're at it, please get me access to
Google's captcha models
Facebook's face directory
Google's GPS location data hoard (most every Android phone on the planet, 24/7 (!), and any iPhone navigating with Google Maps)
And so on and so on.
All of which I've directly contributed to and never (directly) received anything in return.
> All of which I've directly contributed to and never (directly) received anything in return
To be fair, you received a service for free that you may have otherwise had to pay for. I'm not saying it's just, but to say you didn't get anything in return is disingenuous.
Agreed. I mostly meant that I'll never see the actual dataset that I contributed to. That's why I'd prefer to spend my time on things that I can see, like OpenStreetMap :)
While you weren't paying for it with currency, the service is most certainly not "free". There's still a transaction happening when you use the service, albeit a transaction the service provider refuses to acknowledge outside the terms of service.
Not saying you are saying this but it amused me how many people believe(d) that Apple wasn’t mining and hoarding location data either because well, they’re Apple and they love you. All those traffic statuses in Apple Maps on minor side streets with no monitoring came from the … traffic fairy, perhaps.
Everything “free” coming from a company means they’ve found a way to monetise you in some way. The big long ToS we all casually accept without reading says so too.
Other random examples which appear free but aren’t: using a search engine, using the browser that comes with your phone, instagram, YouTube… etc.
It’s not always about data collection, sometimes it’s platform lock-in, or something else but there is always a side of it that makes sense for their profit margin.
Hiding shady or unexpected stuff in the ToS is illegal in the EU and other countries, for example. So just because some companies behave amorally, that doesn't mean we have to accept hundreds of pages of legalese dictating terms to us.
I don't think there is anything amoral here. Niantic explicitly sends players to take videos of places for rewards. It's not like it's done in a sneaky way.
Being somehow surprised they actually plan to do things with the data they have you gather is a bit weird.
Of course there was consent. There is even an explicit EULA, which people have to agree to before playing, stating in plain writing that you are collecting data for them.
That people suddenly wake up to the fact that they were dumb for providing labour for worthless virtual gifts doesn't magically allow them to claim, after the fact, that it was abuse.
If people don’t read or understand the EULA, then it violates the spirit of the legislation (not to mention it’s plain shady). Consent must be voluntary (opt-in) and informed.
You can spin this both ways. So if I include a 12,000 page EULA with my product, you're the idiot if page 8,172 includes a footnote that allows me to sell your data, but uses terms defined a few thousand pages earlier, so you actually have to read all of it?
You can play these shenanigans with businesses, but I for one am happy such behaviour is illegal here when selling to consumers.
I absolutely agree with you that this should not be the norm. The fact is that "they" absolutely do it, and even give you "rewards" for your behaviour and actions in the free game. Reminds me of a certain opioid crisis, but now it is combining software with the human psyche almost directly.
Niantic have never made a secret of the fact that they're crowdsourcing to enrich their mapping data (eg data from Wayfarer and Ingress was used to seed Pokemon Go and Wizards Unite). I can't see it as a sudden gotcha, as it's practically their USP.
Which are surely, totally not ingesting every iota of data they can get their hands on (legally or not, including your prompts) for training and the soon-to-be born “embedded ads”.
And who is funding them? How are they paying for their servers? A product can't be free; someone somewhere is paying for it. The main question is why they are paying for it.
All companies should be truthful, forthcoming, and specific about how they will use your data, but…
If you enjoy the game, play the game. Don’t boycott/withhold because they figured out an additional use for data that didn’t previously exist.
Another way of viewing this: Google Maps is incredibly high-quality mapping software with lots of extra features. It is mostly free (for the end user). If no one uses it, Google doesn’t collect the data and nobody can benefit from the analysis of that data (e.g. traffic and ETAs in Google Maps).
There’s no reason to hold out for a company to pay you for your geolocation data because none of them offer that service.
I would argue that's being legally truthful, but not practically truthful. The company knows there are ways they can ensure their consumers are aware of the truth. And they know that burying it in Ts and Cs isn't one of them.
I'm inclined to agree with your distinction in general. But not in this particular case:
Everybody knows, even without actually bothering to read the terms-and-conditions, that they will say that the company owns all the data. Letter and spirit agree.
In some sense, reading the T&Cs might actually be detrimental to your understanding: you might misinterpret the carefully lawyered language to conclude that there are certain limits to what the company can do with your data. But they are probably way better than you at interpreting legal terms they crafted themselves.
I wish it were that simple but I think it's reasonable to hesitate. We don't know what these models are going to be used for. If by playing you're unwittingly letting something powerful fall into the wrong hands, maybe play something else.
(Generally speaking. I'm not trying to throw stones at Niantic specifically here.)
It may surprise you to learn that Pokemon Go is a nearly 10-year-old game, based on 40-year-old beloved IP, that when it was released did not exist in the same data hellscape we do today; and even if it did, the attraction of the IP would overrule people thinking about this kind of thing. These kinds of comments are extraordinarily disingenuous sounding, particularly when anyone can spend 3 seconds and figure out their primary market is literal children.
> [...] when it was released did not exist in the same data hellscape we do today [...]
That was fairly obvious at the time. And people used more or less exactly the same language to describe the world back then, too.
> These kinds of comments are extraordinarily disingenuous sounding, particularly when anyone can spend 3 seconds and figure out their primary market is literal children.
Poke Mongo was popular with people of all age groups, and (most) children have parents or other guardians to help them with these decisions.
I believe Google explicitly stated that they used data collected from Ingress (arguably a predecessor to Pokemon Go) at the time. It's the reason Niantic was founded. It's hard to take these complaints seriously.
You've also been tricked into making your comment, which will undoubtedly be fed into an LLM's training corpus, and someone will be profiting off that, along with my comment as well. What a future we live in!
If anything, it makes me sad that Google sat and watched OpenAI tarnish public recognition of AI, and meanwhile no one had figured out realistic digital persona reconstruction from Twitter logs applicable to arbitrary users. I've put so much of my "effort" into the Internet's digital corpus with that purpose in mind!
Note to future digital me, do as I did 2007-2014. I approve.
Lol, do you really think that? I did it from a desire to contribute to the conversation, and I was aware that that would be a future possibility :) I'm not really getting much in return or being incentivized by Y Combinator.
I think the joke was that it's kind of the same with Pokemon GO. You play the game mainly because it's fun or lets you get some exercise in, so it's not really a bad thing that the company used the data to train a useful model. You're still having fun or doing exercise regardless of what they do with the data. Essentially, it's a positive externality: https://www.economicshelp.org/micro-economic-essays/marketfa...
But I think your point, if I understand it correctly, is that the in-game rewards kind of "hacked your brain" to do it, which is the part you're objecting to?
I think that's part of it,
but another part is that a lot of people do not like what GenAI is doing and are offended that what was a fun game is now part of that project.
Like when captchas were for making old books readable, it felt a lot friendlier than now, when it's all driverless-car nonsense.
Because the goal is to replace you with a machine and to widen the poverty gap. Also because I do not consent to it.
Are you also fine with taking pictures of pretty women on the street (hey, they'd be walking there anyway) and posting them online and farming ad revenue? Or training a model on their likeness for porn?
Every major website, including Reddit and Imgur, has ToS language saying they can do basically anything they want with content you add to their platforms, including AI training.
Sure, but what does that have to do with third parties scraping shit and training their models on it? Which is exactly how these AI bros started their empire? These terms of service were updated after the genie was out of the bottle. Claiming otherwise is revisionist.
There's a webcam of Times Square with ads on the page, making money off pictures of pretty men and women on that street. I don't know how okay or not I am with it, but it's the world we live in.
Tbh I don't think it's a bad argument. There are plenty of things I'd do to be nice to a fellow person that I would NOT do for the benefit of a large company.
What they're doing is (IMO) evil and anti-human and I do not want to be part of it
Because AI is going to create a world where only a few hundred trillionaires and a few thousand billionaires exist while everyone else is in desperate poverty.
Imagine how those of us who played Ingress (Niantic's first game) feel... We were tricked into contributing location data for the game we loved, only to see it reused for the far more popular (and profitable) Pokemon Go.
Why would anyone take issue with this? Asking as someone who tried both games at different points.
Niantic was always open with the fact that they gather location data, particularly in places cars can't go - I remember an early blog post saying as much before they were unbundled from Google. No one was tricked, they were just not paying attention.
They were pretty up-front about it being a technology demo for a game engine they were building. It was obvious from the start that they would build future games on the same platform.
Right? I feel like I'm taking crazy pills here and on Lemmy. The whole point of Ingress was that it was made to sell Google mapping data and point-of-interest data; that's why the game didn't have monetization practices for so long (of course it started having them once all the data was sold, but hey).
I'm with you and the previous commenter. People who feel "tricked" were only fooled by their own blindness. Sorry, but trying to garner sympathy for that is like being asked to feel bad for the stripper who takes her clothes off for money; they both 100% knew what they were getting into, and no other reasonable expectations can be had from engaging in that situation.
Facebook has been around something like a decade now? I forget the exact number, but it's been long enough that everyone should have learned their lesson at this point: if you are creating data, be it personal, geospatial, or otherwise, by using a product, expect that data to be used as a commodity by the makers of said product.
Do you honestly feel tricked that a gameplay mechanic which transparently asks you to record 50-100MB videos of a point-of-interest and upload it to their servers in exchange for an (often paid/premium) in-game reward was a form of data collection?
I don't think I've done any in PoGo (so I know it's very optional), but I've done plenty in Ingress, and I honestly don't see how it's possible to be surprised that it was contributing to something like this? It is hardly an intuitively native standalone gameplay mechanic in either game.
> They consistently incentivize you to scan pokestops (physical locations) through "research tasks" and give you some useful items as rewards.
There are plenty of non-scan tasks you can do to get those rewards as well but I do think Poffins (largely useless unless you are grinding Best Buddies) are locked behind scan tasks.
Source: Me. This is the one topic I am very qualified to speak to on this website.
Frankly given the numbers of hours of entertainment most people got out of Pokémon Go, I suspect this might be one of the cases where people have been best compensated for their data collection.
Frankly, with the amount of real-world walking required to progress in Ingress and Pokémon Go, most players were compensated by the motivation to get a decent amount of exercise, which had a net positive impact on their health. Most exercise apps require users to pay subscriptions for the pleasure of using them.
> I have been tricked into working to contribute training data so that they can profit off my labor.
You weren't tricked - your location data doesn't belong to you when you use the game.
I don't get why people somehow feel entitled to the post-facto profit/value derived from data they were willingly giving away at the time, before they "knew of" its potential value.
Yeah, they did the same in Ingress: film a portal (pokéstop/gym) while walking around it to gain a small reward. I've always wondered what kind of dataset they were building with that -- now we know!
At some point, can we agree that if we don't pay anything for something and we have fun with it, it's OK for the company to get something in return for investing millions of dollars in creating the experience for us?
If you weren't aware until now and were having fun, is this outcome so bad? Did you have a work contract with this company to provide labor for wages that they didn't pay? If not, then I don't think you can be upset that they are possibly profiting from your "labor".
Every time we visit a site that is free, which means 99.9% of all websites, that website bore a cost for our visit. Sometimes they show us ads which sometimes offsets the cost of creating the content and hosting it.
I am personally very glad with this arrangement. If a site is too ad filled, I just leave immediately.
With a game that is free and fun, I would be happy that I didn't have to pay anything and that the creator figured out a way for both parties to get something out of the deal. Isn't that a win-win situation?
Also, calling your experience "labor" when you were presumably having fun (if you weren't then why were you playing without expectation for payment in return?) is disingenuous.
At some point we need to be realistic about the world in which we live. Companies provide things for free or for money. If they provide something for "free", then we can't really expect to be compensated for our "labor" playing the game and that yes, the company is probably trying to figure out how to recoup their investment.
Honestly you should have assumed they were using the collected data for such a purpose. It would be shocking if they weren't doing this directly or selling the data to other companies to do this.
But did you really scan the items they wanted? Most people in my local community scan their hands or the pavements around the pokestop.
They have a great map of London pavements if they want to do it.
This title is editorialized. The real title is: "Building a Large Geospatial Model to Achieve Spatial Intelligence"
> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.
My personal layman's opinion:
I'm mostly surprised that they were able to do this. When I played Pokémon GO a few years back, the AR was so slow that I rarely used it. Apparently it's so popular and common, it can be used to train an LGM?
I also feel like this is a win-win-win situation here, economically. Players get a free(mium) game, Niantic gets a profit, the rest of the world gets a cool new technology that is able to turn "AR glasses location markers" into reality. That's awesome.
I'm pretty sure most of the data is not coming from the AR features. There are tasks in the game to actually "scan" locations. Most people I know who play also play the game without the AR features turned on unless there's an incentive.
I feel like I'm going mad; if you actually read the article, it's a theoretical thing they'd like to lead in, yet literally every comment assumes it has launched. The title being "announces model" rather than the actual title certainly doesn't help.
It's OK to adjust the title to have more relevant facts or to fix a poorly worded one. Editorializing is more like 'Amazing: Niantic makes world-changing AI breakthrough'.
The original title was not poorly worded though. The new one was editorialized to get a certain reaction out of readers — I promise you the responses on this thread would look different with the original title.
The original title fails to explain who is building the model and where the data is coming from. It also implies a discussion of the task of training models, whereas the actual page is an announcement of an intent to train a model.
Many articles only make it to the front page because the submitted title was editorialized. The rules may say one thing, but the incentive is to strike a subtle balance between editorializing and avoiding flags for extreme editorialization, with mods only stepping in to correct the title once it's gotten loads of upvotes and comments already.
> the rest of the world gets a cool new technology
The rest of the world gets an opportunity to purchase access to said new technology, you mean! It's not like they're releasing how they generated the models. It's much more difficult to get excited about paid-access to technology than it is about access to tech itself.
True, true, but they can still purchase it. I mentioned that it's a win-win-win situation, which includes Niantic profiting too (not a bad thing, it's a good incentive), which entails selling access to it.
Though as a copyright reformist, I do believe that such models should be released as public domain after 14 years. Though the cloud thing does make these sort of obligations harder to enforce...
All they needed was a shit ton of pictures. The AR responsiveness (and Pokemon Go) have nothing to do with it. It was just a vehicle for gathering training data.
Not wanting to overdo it, but is there possibly an argument that geospatial data should be in the commons, and that Google has some obligation to put the data back into the commons?
I'm not arguing on a legal basis, but if it's crowdsourced, then the inputs came from ordinary people. Sure, they agreed to the T&Cs.
Philosophically, I think knowledge, facts of the world as it is, even the constructed world, should be public knowledge not an asset class in itself.
I’ve been saying this about Google Maps for years, especially their vast collection of public transport loading data and real time road speeds.
People are duped into thinking they’re doing some “greater good” by completing the in-app surveys and yet the data they give back is for Google’s exclusive use and, in fact, deepens their moat.
It's not solely for Google's benefit. They're ("we're" tbh) contributing data that improves services that we use. It has additional selfish and altruistic benefits beyond feeding the Googly beast.
IIRC Google Maps basically does not make money. I wonder if there could be a government deal to subsidize it on the condition that the data be open sourced.
They made $11B last year. It has an incredible amount of ads. If you haven't noticed, then that means they did a great job. (Tip: look for the custom logo pins on the map. It's printing money.)
While I have no way to validate this, I highly suspect that the routing algorithm is also subtly manipulated. There is a route I drive with regular frequency that contains a roughly 20 mile section of two mostly parallel roads, one for through traffic and one for local. Every single time I drive through, Google routes me to the local traffic road. I know for certain the local road is slightly slower and it's also simply incorrect. The only way it makes sense is if it's a bug, or my hunch is that Google weights the route a little higher because it goes by a bunch of businesses that pay for advertising.
No. It should be owned by the owners of the land on which these objects are located. You should be able to provide access at different levels of detail to public or private entities that need said access, and revoke it at your own will. Maybe make some money out of it.
A 3D artist can create a model of a space and offer rights to the owner of the land, who in turn can choose to create their own model or use the one provided by the artist.
I expect any company which collates information about geospatial datasets to release the substance of them, yes. Maybe there's an IPR lockup window, but at some point the cadastral facts of the world are part of the commons to me.
I would think there's actually a lot of epidemiology data that should be winding up in the public domain but is instead getting locked up in medical IPR. I could make the same case there. Cochrane reviews rely on being able to do meta-analyses over existing datasets. That's value.
They found a creative way to incentivize the collection of it and paid for the processing. Anybody can collect the same data, I don't see why they would have to release it...
Pokemon Go is built on the same engine as Inverness, I think it's called. When it launched they even used the same POIs. I think this was ~5-7 years before PGO launched.
Edit: I said inverness and meant ingress. Apologies.
Pokemon Go was launched on the Unity game engine in 2016. Ingress was using a different game engine at the time, and wasn't rewritten into Unity until several years later. Even the backend/server side was significantly different, with them needing to write a shim to ensure compatibility during & after the move to Unity.
Perhaps, perhaps not - I have my theories, but is that not what you meant when you said Pokemon Go was built on the same engine as Ingress?
I do think it wasn't until after Pokemon Go launched and they saw its success that they shifted focus to be more of a platform for these types of experiences (see Niantic Lightship). Additionally, I think Unity offered them the opportunity to integrate with ARCore and collect much more detailed data than they would ever have been able to on the old Ingress engine. I expect a significant chunk of ARCore functionality was added specifically thanks to Niantic and Unity (you see Unity mentioned all over the Google developer docs for it).
I imagine the logs aren’t tied to the engine, which I suppose is the point I should have made without researching which engine the games used as opposed to which company made both games.
It's far cheaper to pay people on bikes to go round places than it is to do what Niantic did. Mind you, they make money hand over fist, so the mapping is a side quest for them.
Hanke actually got awards from the CIA for his work at In-Q-Tel investing in Keyhole/Niantic, so yeah, it's safe to assume that the agency invested specifically to have players collect data. Considering many Pokémon were on or near military bases around the world… it's not hard to guess what the CIA's real goal was.
I was wondering about the privacy implications: given a photo, the LGM could decode it to not just positioning, but also time-of-day and season (and maybe even year, or specific unique dates e.g. concerts, group activities).
Colors, amount of daylight(/nightlight), weather/precipitation/heat haze, flowers and foliage, traffic patterns, how people are dressed, other human features (e.g. signage and/or decorations for Easter/Halloween/Christmas/other events/etc.)
(as the press release says: "In order to solve positioning well, the LGM has to encode rich geometrical, appearance and cultural information into scene-level features"... but then it adds "And, as noted, beyond gaming LGMs will have widespread applications, including spatial planning and design, logistics, audience engagement, and remote collaboration.") So would they predict from a trajectory (multiple photos + inferred timeline) whether you kept playing/ stopped/ went to buy refreshments?
As written it doesn't say the LGM will explicitly encode any player-specific information, but I guess it could be deanonymized (esp. infer who visited sparsely-visited locations).
(Yes obviously Niantic and data brokers already have much more detailed location/time/other data on individual user behavior, that's a given.)
> Colors, amount of daylight(/nightlight), weather/precipitation/heat haze, flowers and foliage, traffic patterns, how people are dressed, other human features (e.g. signage and/or decorations for Easter/Halloween/Christmas/other events/etc.)
I mean, in theory it could. But in practice it'll just output lat, lon and a quaternion. It's going to be hard enough to get the model to behave well enough to localize reliably, let alone do all those other things.
The dataset, yes, that'll contain all those things. But the model won't.
You don't know for sure the model won't contain non-location data, like I noted the additional blurb vaguely said: "And, as noted, beyond gaming LGMs will have widespread applications, including spatial planning and design, logistics, audience engagement, and remote collaboration."
There are lots of "coulds", "ifs" and "shoulds". But how do you tokenise all those extra bits? For it to function as a decent location system, it has to be invariant to weather/light conditions. Otherwise you'll just fall back to GPS.
At its heart, it's a photo -> camera pose (location) converter. The bigger issue is how you stop it hallucinating the wrong location when it has high uncertainty. That's before you get into the scaling issues of making a model cope with bigger-than-room-scale point clouds.
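To make the "photo -> camera pose converter" idea concrete, here's a minimal sketch in Python of what that interface tends to look like: the only outputs are a position, an orientation quaternion, and a confidence score, and anything below the confidence threshold should fall back to GPS rather than risk a hallucinated location. Every name, the stub model call, and the 0.7 threshold are hypothetical, not anything Niantic has described.

    from dataclasses import dataclass

    @dataclass
    class CameraPose:
        lat: float          # degrees
        lon: float          # degrees
        quat: tuple         # (w, x, y, z) camera orientation as a unit quaternion
        confidence: float   # model's own confidence estimate in [0, 1]

    def predict_pose(image_bytes: bytes) -> CameraPose:
        # Stand-in for the actual model; always returns a fixed, low-confidence guess.
        return CameraPose(lat=52.5200, lon=13.4050, quat=(1.0, 0.0, 0.0, 0.0), confidence=0.3)

    def localize(image_bytes: bytes, gps_fallback: CameraPose, min_conf: float = 0.7) -> CameraPose:
        """Trust the visual model only when it is confident; otherwise fall back to GPS."""
        pose = predict_pose(image_bytes)
        return pose if pose.confidence >= min_conf else gps_fallback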
the first "public" VPS was released a while ago, yet six years later we still don't see widespread adoption of visual based location, even though its much much more accurate in an urban environment.
Until pretty recently, phone telemetry data was a free-for-all, and if you’re, say, in legal trouble, a map of the location of your phone over the past… however long you’ve had your phone is immediately available.
> For example, it takes us relatively little effort to back-track our way through the winding streets of a European old town. We identify all the right junctions although we had only seen them once and from the opposing direction.
That is true for some people, but I'm fairly sure that the majority of people would not agree that it comes naturally to them.
I really want to know what the NSA and NRO and Pentagon are doing training deep neural networks on hyperspectral imaging and synthetic aperture radar data. Imagine having something like Google Earth but with semantic segmentation of features combined with what material they are made from. All stored on petabytes of NVMe flash.
So, I'm not really sure what to do here, given that this was exactly and specifically what we were building, and frankly we had a lot of success in actually building it.
Interestingly, Pokemon GO only prompts players to scan a subset of the Points of Interest on the game map. Players can manually choose to scan any POI, but with no incentive for those scans I'm sure it almost never happens.
> Today we have 10 million scanned locations around the world, and over 1 million of those are activated and available for use with our VPS service.
This 1 in 10 figure is about accurate, both from experience as a player and from perusing the mentioned Visual Positioning System service. Most POI never get enough scan data to 'activate'.
The data from POI that are able to activate can be accessed with a free account on Niantic Lightship [1], and has been available for a while.
I'll be curious to see how Niantic plans to fill in the gaps, and gather scan data for the 9 out of 10 POI that aren't designated for scan rewards.
Somehow I always thought something like that would have been the ultimate use case for Microsoft Photosynth (developed from Photo Tourism research project), ideally with a time dimension, like browsing photos in a geo spatio-temporal context.
I expect that was also part of the reason behind their Flickr bid back then.
I worked on this, and yes, it was 100% related to the interest in Flickr. At the time Google Street View had just become a thing, and there was interest in effectively crowdsourcing the photography via Flickr and some of the technology behind Photosynth.
I still don't get what an LGM is. From what I understood, it isn't actually about any "geospatial" data at all, is it? It is rather about improving some vision models to predict how the backside of a building looks, right? And the training data isn't of people walking, but images they've produced while catching pokemon or something?
P.S.: Also, if that's indeed what they mean, I wonder why Google's Street View data isn't enough for that.
> It is rather about improving some vision models to predict how the backside of a building looks, right?
This, yes, based on how the backsides of similar buildings have looked in other learned areas.
But the other missing piece seems to be relativity and scale. I do 3D model generation at our game studio right now, and the biggest want/need current models can't satisfy is scale (and, specifically, relative scale): we can generate 3D models for entities in our game, but we still need a person in the loop to scale them to a correct size relative to other models. Trees are bigger than humans, and buildings are bigger still. Current generative 3D models just create a scale-less model as output; it looks like a "geospatial" model incorporates some form of relative scale, and would (could?) carry that into generated models (or, more likely, maps of models rather than individual models themselves).
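As a rough illustration of the person-in-the-loop step described above, here is a minimal sketch; the category names and reference heights are made up, and a real pipeline would use its own asset table and mesh library.

    import numpy as np

    # Hypothetical reference heights in metres, keyed by asset category.
    REFERENCE_HEIGHT = {"human": 1.75, "tree": 8.0, "house": 10.0}

    def rescale_to_category(vertices: np.ndarray, category: str) -> np.ndarray:
        """Uniformly scale a scale-less generated mesh so its bounding-box
        height matches the reference height for its category (assumes Y-up)."""
        height = vertices[:, 1].max() - vertices[:, 1].min()
        return vertices * (REFERENCE_HEIGHT[category] / height)

    # Usage: a generated "tree" that came out human-sized gets stretched to ~8 m.
    tree = np.random.rand(100, 3)  # stand-in for a generated mesh's vertex array
    tree_scaled = rescale_to_category(tree, "tree")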
> And training data isn't of people walking, but from images they've produced while catching pokemons or something?
Training data is people taking dedicated video of locations. Only ARCore-supported devices can submit data, too. So I assume that along with the video they're also collecting a good chunk of other data, such as depth maps, accelerometer, gyroscope, and magnetometer readings, GPS, and more.
The ultimate goal is to use the phone camera to get very accurate mapping and position. They're able to merge images from multiple sources which means they're able to localize an image against their database, at least relatively.
I’ve published research in this general arena and the sheer amount of data they need to get good is massive. They have a moat the size of an ocean until most people have cameras and depth sensors on their face
It’s funny, we actually started by having people play games as well but we expressly told them it was to collect data. Brilliant to use an AR game that people actually play for fun
I'm guessing this could be the new bot that plays GeoGuessr competitively. It would be interesting if Google trained a similar model on all the Street View data and released it; I sure hope they do.
Has anyone done something similar with geolocated WiFi MAC addresses, i.e. a small model for predicting location from those?
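The classic baseline for that is tiny: keep a table of known access-point locations and take a signal-strength-weighted centroid of whatever BSSIDs show up in a scan. A minimal sketch, with made-up BSSIDs and coordinates:

    # Hypothetical database of known access-point locations (BSSID -> lat, lon).
    AP_DB = {
        "aa:bb:cc:00:00:01": (52.5200, 13.4050),
        "aa:bb:cc:00:00:02": (52.5201, 13.4046),
        "aa:bb:cc:00:00:03": (52.5198, 13.4053),
    }

    def estimate_position(scan: dict) -> tuple:
        """scan maps BSSID -> RSSI in dBm; returns the signal-weighted centroid
        of the known access points seen in the scan."""
        total = lat_acc = lon_acc = 0.0
        for bssid, rssi in scan.items():
            if bssid not in AP_DB:
                continue
            weight = 10 ** (rssi / 10.0)  # dBm -> linear power, used as the weight
            lat, lon = AP_DB[bssid]
            lat_acc += weight * lat
            lon_acc += weight * lon
            total += weight
        if total == 0:
            raise ValueError("no known access points in scan")
        return lat_acc / total, lon_acc / total

    print(estimate_position({"aa:bb:cc:00:00:01": -45, "aa:bb:cc:00:00:03": -70}))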
I believe I read somewhere that a GeoGuessr AI based on Street View data was mostly classifying based on the camera/vehicle setup. As in, a smudge on the lens in this corner means it's from Paris.
This crowdsourced approach probably eliminates that issue.
> Today we have 10 million scanned locations around the world, and over 1 million of those are activated and available for use with our VPS service. We receive about 1 million fresh scans each week
Wait, they get a million a week but they only have a total of 10 million, i.e. ten weeks' worth? Is this a typo or am I missing something?
A location probably requires like a million scans to be visualized properly. Think of a park near your house - there are probably thousands of ways to view each feature within.
I don’t see why not. Photos are often combined with satellite data for photogrammetry purposes, even on large scale - see the recent Microsoft Flight Simulator (in a couple days, when it actually works)
It's usually aerial data, especially oblique aerial.
Bing Maps is still pretty unique in offering them undistorted and not draped over some always degraded mesh.
So what they are doing is not different from previous "VPS" systems; what's different is how they are doing it.
What is a "VPS" At its heart, Visual Positioning Systems are actually pretty simple. You build a 3d point cloud of a place, with each point being a repeatable unique feature that can be extracted from an image (see https://blog.ekbana.com/extracting-invariant-features-from-i...) Basically a "finger print"/landmark of a thing in real life that can be extracted from an image reliably.
To make that work, you need to generate a large map of these points: https://www.researchgate.net/figure/Sparse-point-cloud-Figur... This basically involves taking lots of pictures with GPS tags recording where they were taken. Google has the advantage of Street View, Niantic has its game. Others had to pay a bunch of people to go round a city with cameras.
Once you build that point cloud (which isn't actually that easy; you can't do it all at once, and aligning point clouds is hard), you can then use trigonometry to work out where a picture was taken. This is called "re-localization", which is a stupid name. The hard part is the data management: there are billions of points in the world, and partitioning the database so that you can quickly locate a picture is what's difficult.
Hence this approach, which is basically "train a model to do it for us". You still get a "VPS", and you still need all that data, but they hope that a model will be able to optimize for speed.
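For reference, the classical (non-ML) version of that last step is only a few lines once you already have 2D-3D matches: OpenCV's solvePnPRansac recovers the camera pose from matched image keypoints and map points. A minimal sketch with stand-in data; real systems spend nearly all their effort on the feature matching and database partitioning this skips.

    import numpy as np
    import cv2

    # Stand-ins: N image keypoints (pixels) already matched to N map landmarks (metres).
    object_points = (np.random.rand(50, 3) * 10).astype(np.float32)
    image_points = (np.random.rand(50, 2) * 1000).astype(np.float32)

    # Camera intrinsics: focal lengths and principal point, in pixels.
    K = np.array([[1200.0, 0.0, 640.0],
                  [0.0, 1200.0, 360.0],
                  [0.0, 0.0, 1.0]])

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
    if ok and inliers is not None:
        R, _ = cv2.Rodrigues(rvec)      # rotation matrix from the Rodrigues vector
        camera_position = -R.T @ tvec   # camera centre in the map's coordinate frame
        print("camera at", camera_position.ravel(), "from", len(inliers), "inlier matches")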
is it private?
No, the original system isn't private. If they've done their job properly, then nothing identifiable will be in the "map", as that's extra data you don't need. What they do with the raw photos, and the metadata they contain, is another matter.
Even before LLMs, I knew they were going to launch a fine-grained mapping service with all that camera and POI data. Now this one is obviously much better. Very few companies actually have this kind of data. It remains to be seen how they make money out of it.
However, I can't fully agree that generating the 3D scene "on the fly" is the future of maps and many other AR use cases.
The thing with geospatial objects (buildings, roads, signs, etc.) is that they are very static; not many changes are made to them, and many of the changes are not relevant to the majority of use cases. For example: today your house is white, and in three years it has stains and a yellowish color due to age, but everything else is the same.
Given that storage is cheap and getting cheaper, and that 5G and local-network bandwidth is already faster than most current use cases need, while computer graphics are still bound by GPU performance, I'd say it would be much more useful to identify the location and the building you are looking at and pull the accurate model from the cloud (with further optimisations as needed, such as pulling only the data the user has access to, or needs for the task at hand). Most importantly, users only need access to a small subset of 3D space on a daily basis, so you can keep a local cache on end devices for best performance and rendering, or stream the rendered result from the cloud as NVIDIA GDN does.
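A minimal sketch of that "pull from the cloud, cache locally" idea; everything here is hypothetical (the grid size, the tile endpoint, and the .glb format are placeholders, not a real service):

    import functools
    import urllib.request

    CELL_DEG = 0.001  # ~100 m grid cells at mid latitudes; purely illustrative

    def tile_key(lat: float, lon: float) -> str:
        """Quantise a position onto a fixed grid so nearby queries share one cache entry."""
        return f"{round(lat / CELL_DEG)}_{round(lon / CELL_DEG)}"

    @functools.lru_cache(maxsize=512)  # the on-device cache of recently visited tiles
    def fetch_tile(key: str) -> bytes:
        # Hypothetical endpoint; a real system might return a glTF / 3D Tiles blob here.
        url = f"https://example.com/world-model/tiles/{key}.glb"
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    def model_for_view(lat: float, lon: float) -> bytes:
        return fetch_tile(tile_key(lat, lon))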
The most precise models will come from CAD files of newly built buildings, then retrospectively from CAD files of buildings built in the last 20-30 years (I would bet most of them had some sort of computer model made), and finally, going back even further, from having AI look at old 2D construction plans and reconstruct the building in 3D.
Once a building is reconstructed (or a concrete pole, like the one shown in the article), you can pull its 3D model from the cloud and place it in front of the user; this will cover 95% of AR use cases. For the other 5% you might want real-time recognition of the current state of surfaces, or of changes in geometry (like tracking changes in road quality compared with previous scans or with a reference model), but those cases can be tackled separately, and having a precise 3D model will only help; it won't need to be reconstructed from scratch.
This is a good first step toward a 3D map; however, there should be an option to go to the real location and have an expert make edits to the 3D plan, so that the model can be precise and not just "kind of" precise.
I don't think so. I wanted to voice this quickly without a detailed rebuttal as yours is the top comment and I don't think it's correct. Hopefully someone will do my homework for me (or alternatively tell me I'm wrong!).
I wonder if there's a sweet spot for geospatial model size.
A model trained on all data for 1m in every direction would probably be too sparse to be useful, but perhaps involving data from a different continent is costly overkill? I expect most users are only going to care about their immediate surroundings. Seems like an opportunity for optimization.
Waymo is supposedly geofenced because they need detailed maps of an area. And this is supposedly a blocker for them deploying everywhere. But then Google goes and does something like this, and I'm not sure, if it's even really true that Waymo needs really detailed maps, that it's an insurmountable problem.
Conversation about ‘players are the product’ of Pokémon go aside… What are some practical applications of an LGM?
Seems like navigation is ‘solved’? There’s already a lot of technology supporting permanence of virtual objects based on spatial mapping?
Better AI generated animations?
I am sure there are a ton of innovations it could unlock…
"It could help with search and rescue" jokes aside [1] this seems really useful for robotics. Their demo video is estimating a camera position from a single image, after learning the scene from a couple images. Stick the camera on a robot, and you are now estimating where the robot is based on what the robot has seen before.
They are a bit vague on what else the model does, but it sounds like they extrapolate what the rest of the environment could look like, the same way you can make a good guess what the back side of that rock would look like. That gives autonomous robots a baseline they can use to plan actions (like how to drive/fly/crawl to the other side) that can be updated as new view points become available.
It may not be geospatial data at all, and I'm not sure how much the users consented, but the data collection strategy was well crafted. I remember recommending building a game to collect handwriting data from testers (about a thousand) to the research lab I worked for a long time back.
This looks like another case of data being used beyond the original purpose for which it was collected. Clearly it should be illegal to use any such data without asking every single user whose data they want to use for consent. And by that I do not mean some extortion scheme.
I’m intrigued by the generative possibilities of such a model even more than how it could be used with irl locations. Imagine a game or simulation that creates a realistic looking American suburbia on the fly. It honestly can’t be that difficult, it practically predicts itself.
People here are complaining that you are somehow owed something for contributing to the data set, or that because you use Google Maps or reCAPTCHA you are owed access to their training data.
I mean, I'd like that data too. But you did get something in return already: a game that you enjoy (or you wouldn't play it), free and efficient navigation (better than your TomTom ever worked), sites not overwhelmed by bots or spammers.
Yeah, Google gets more out of it than you probably do, but it's incorrect to say that you are getting "nothing" in return.
The company was formed as Niantic Labs in 2010 as an internal startup within Google, founded by the then-head of Google's Geo Division (Google Maps, Google Earth, and Google Street View).
It became an independent entity in October 2015 when Google restructured under Alphabet Inc. During the spinout, Niantic announced that Google, Nintendo, and The Pokémon Company would invest up to $30 million in Series-A funding. Not sure what the current ownership is (they've raised a few more times since then), but they're seemingly still very closely tied with Google.
Going to try to clear this up from speculation as best I can.
Niantic was a spinoff divested from Google Maps roughly a decade ago that created a game called Ingress. This used OpenStreetMap data to place players in the real world, and players could nominate locations as points of interest (POI), which Niantic's human moderators judged for sufficient noteworthiness. Two years after Ingress was released, Niantic purchased limited rights to use the Pokemon IP and bootstrapped Pokemon Go from this POI data. Individual points of interest became Pokestops and Gyms. Players had to physically go to these locations, where they could receive in-game items needed to continue playing, or battle other Pokemon.
From the beginning, Pokemon Go had AR support, but it was gimmicky and not widely used. Players would post photos of the real world with Pokemon overlaid and then turn it off, as it was a significant battery drain and only slowed down your ability to farm in-game items. The game itself has always been a grind type of game. Play as much as possible to catch Pokemon, spin Pokestops, and you get rewards from doing so. Eventually, Niantic started having raids as the only way to catch legendary Pokemon. These were multiplayer in-person events that happened at prescribed times. A timer starts in the game and players have to be at the same place at the same time to play together to battle a legendary Pokemon, and if they defeat it, they'll be rewarded with a chance to catch one.
Something like a year after raids were released, Niantic released research tasks as a way to catch mythical Pokemon. These required you to complete various in-game tasks, including visiting specific places. Much later than this, these research tasks started to include visiting designated Pokestops and taking video footage, from a large enough variety of angles to satisfy the game, and then uploading that. They started doing this something like four or five years ago, and getting any usable data out of it must have required an enormous amount of human curation, which was largely volunteer effort from players themselves who moderated the uploads. The game itself would give you credit simply for having the camera on while moving around enough, and it was fairly popular to simply videotape the sidewalk and the running game had no way to tell this was not really footage of the POI.
The quality of this data has always been limited. Saying they've managed to build local models of about 1 million individual objects leaves me wondering what the rate of success is. They've had hundreds of millions of players scanning presumably hundreds of millions of POI for half a decade. But a lot of the POI no longer exist. Many of them didn't exist even when Pokemon Go was released. Players are incentivized to have as many POI near them as possible because this provides the only way to actually play, and Niantic is incentivized to leave as much as they can in the game and continually add more POI because, otherwise, nobody will play. The mechanics of the game have always made it tremendously imbalanced in that living near the center of a large city with many qualifying locations results in rich, rewarding gameplay, whereas living out in the suburbs or a rural area means you have little to do and no hope of ever gaining the points that city players can get.
This means many scans are of objects that aren't there. Near me, this includes murals that have long been painted over, monuments to confederate heroes that were removed during Black Lives Matter furors of recent years, small pieces of art like metal sculptures and a mailbox decorated to look like Spongebob that simply are not there any more for one reason or another, but the POI persist in the database anyway. Live scans will show something very different from the original photo that still shows up in-game to tell you what the POI is.
Another problem is many POI can't be scanned from all sides. They're behind fences, closed off because of construction, or otherwise obstructed.
Yet another problem is GPS drift. I live near downtown Dallas right now, but when the game started, I lived smack dab in the city center, across the street from AT&T headquarters. I started playing as something to do when walking during rehab from spine surgeries, but I was often bedridden and couldn't actually leave the apartment. No problem. I could sometimes receive upwards of 50 km a day of walking credit simply by leaving my phone turned on with the game open. As satellite line of sight is continually obstructed and then unobstructed by all the tall buildings surrounding your actual location, your position on the map jumps around. The game has a built-in speed limit meant to prevent people from playing while driving, and if you jump too fast, you won't get credit; but as long as the jumps in location are small enough to keep your average over some sampling interval below that limit, you're good to go. Positional accuracy within a city center, where most of the POI actually are, is very poor.
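The mechanic being gamed there is just an average-speed check over consecutive GPS samples. Something like the sketch below (purely illustrative numbers, not Niantic's actual logic) will happily credit slow, continuous drift as walking while rejecting any single fast jump:

    from math import radians, sin, cos, asin, sqrt

    def haversine_m(p, q):
        """Great-circle distance in metres between two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371000 * asin(sqrt(a))

    def credited_distance(samples, max_speed_mps=2.8):
        """samples: list of (timestamp_s, lat, lon). Credit a hop only if the
        average speed since the previous sample stays under the cap."""
        credited = 0.0
        for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
            d = haversine_m(p0, p1)
            if d / max(t1 - t0, 1e-9) <= max_speed_mps:
                credited += d
        return credited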
They claim here that they have images from "all times of day," which is possibly true if they literally mean daylight hours. I'm awake here writing this comment at 2:30 AM and have always been a very early riser. I stopped playing this game last summer, but when I still played, it was mostly in darkness, and one of the reasons I quit was the frustration of constantly being given research tasks I could not possibly complete, because the game would reject scans made in the dark.
Finally, POI in Ingress and Pokemon Go are all man-made objects. Whatever they're able to get out of this would be trained on nothing from the natural world.
Ultimately, I'm interested in how many POI the entire map actually has globally and what proportion the 1 million they've managed to build working local models of represents. Seemingly, it has to be objects that (1) still exist, (2) are sufficiently unobstructed from all sides, and (3) in a place free from GPS obstructions such that the location of players on the map is itself accurate.
That isn't nothing, but I'm enormously skeptical that they can use this to build what they're promising here, a fully generalizable model that a robot could use to navigate arbitrary locations globally, as opposed to something that can navigate fairly flat city peripheries and suburbs during daylight hours. If Meta can really get a large enough number of people to wear sunglasses with always-on cameras on them, this kind of data will eventually exist, but I highly doubt what Niantic has right now is enough.
When users scan their barcode, the preview window is zoomed in so users think it's mostly barcode. We actually capture quite a bit more background, typically of a fridge, supermarket aisle, pantry, etc., and it is sent across to us, stored, and trained on.
Within the next year we will have a pretty good idea of what the average pantry, fridge, and supermarket aisle look like. Who knows what's next.
This is outrageously unethical. Someone scanning a barcode would have every reason to think that the code was being parsed locally on their phone. There would be no reason to upload an entire photo to read a barcode. Beyond which, not even alerting the user visually that their camera is picking up background stuff???
What if it's on their desk and there are sensitive legal documents next to it? How are you safeguarding all that private data? You could well be illegally in possession of classified documents, unconsenting nudes, all kinds of stuff. And it sounds like it's not even encrypted.
Look, I will now defend my lack of a sense of humor. That post was 5 minutes old and I was the first person to respond to it. If the poster had <10 posts I would have assumed it was a troll. As sib @gretch writes, I extended them faith that they were earnest.
I will say that the bit about showing users only the barcode but capturing photos outside that frame was pretty clever; it's the kind of detail that belongs in a Neal Stephenson novel. But that's exactly the kind of thing that a million startups would do right now. Yeah, in retrospect it's kinda stupid that someone would admit this and also be proud of getting a better set of photos of refrigerators and supermarket aisles.
So, is this a grade-A 2024 version of Andy Kaufman comedy that requires just one dolt in the audience to take it seriously? Hah. I guess if so, it wouldn't be funny unless someone like me took the bait. I see the humor. But if you analyze why it was funny, the primary reason would be that it was so possible to take it seriously. Especially with 134 or so upvotes, the writer had exactly as much cachet as someone who had interned at a sleazy startup for 2 months and was proud of something really stupid.
This post’s replies makes it clear a lot of us don’t recognize humor. Do people really think MyFitnessPal is trying to build a model of the average pantry?
The humor isn’t recognized because the humor isn’t there. To be funny there has to be a setup, a punchline, some kinda joke structure. Humor isn’t just saying false things…
Imagine a comedian saying this on stage, how many laughs would that get?
> Do people really think MyFitnessPal is trying to build a model of the average pantry?
We’ve all seen dumber things that are real. Juicero is my personal favorite example.
The problem is that it's not possible to make a parody of an unethical company so blatant that it wouldn't also be a 100% plausible description of a business practice that some company actually does...
If this is real, I hope MyFitnessPal doesn't operate in the EU.
Or rather, I hope they do, and receive an appropriate fine for this, if not even criminal prosecution (e.g. if the app uploaded nonconsensual pornography of someone visible only in the cropped out space).
The policy defines "Services" as the mobile app and website. How is building a general purpose model for what the average fridge looks like used to customise either the website or the app? This feels like the kind of flimsy reasoning that only holds so long as no one is challenging it.
Easy. They provide this new general purpose model through the website. Bam, that's a Service that uses photos to customize. They can also expand what counts as a Service unilaterally.
With this broad of a privacy policy, they can start MyFitnessPal.com/UncroppedCandidPhotos where they let people search for users by name, email, or phone and sell your photos to the highest bidder, and that still would count as a Service that uses photos to customize. You consented to it!
> This feels like the kind of flimsy reasoning that only holds so long as no one is challenging it.
No, it is written by professional lawyers to be as permissive as possible.
> No, it is written by professional lawyers to be as permissive as possible.
But you repeat myself.
OK, say they do all that; that isn't customisation (I would argue), it's a new service built from unconsented data scraped from users of the pre-existing services. Call that splitting hairs if you like, but it looks like a risk to me.
If this is real and not a joke, I bet some DPA will disagree if this is brought to their attention. Effective consent under GDPR requires informed consent.
Giving their policy an (admittedly quick) skim there doesn't seem to be any section that mentions AI, LLMs, training any kind of model, using image data from barcode pictures, etc. I'd be very curious to see the explanation of how this is baked into the policy.
I’m not exactly shocked that it could exist. But this usage (beyond the scope of processing barcodes) seems like it couldn’t be construed to fit into the normal avenues of data collection under a privacy policy.
Also with regard to training specifically, this policy was created in late 2020 so I don’t know how it would cover generative models.