Pokémon Go Fest attendees to get refunds as technical issues break the event (techcrunch.com)
109 points by janober on July 22, 2017 | 99 comments



As a long-term Ingress player I am not surprised by this _at all_. Niantic's management has proven itself again and again and again to be completely incompetent at everything they do. (They even managed to ban their own official event photographer after he had GPS problems while traveling to Paris.)

I used to be angry at them for ruining Ingress (one of the best games ever made) but nowadays I am just amazed at the level of sheer incompetence.

They managed to learn _nothing_ from 4 years of Ingress and I think now it's time that the whole upper management is fired so that the company can start fresh.


The scale of this event is bigger than any Ingress anomaly to date. Also, all players seem to be concentrated into a smaller area than at Ingress events, which further complicates things. I think the main issue they could not control today is that Verizon ran out of bandwidth and people simply couldn't connect. Another difference from Ingress events: the Go Fest is a closed venue with security and ticket validation at the gate. In Ingress, you usually validate your ticket beforehand and don't have to worry about queues in the hours before the actual event starts.


Did they buy extra coverage for the event? I've been to football games and Mardi Gras and they don't have issues with coverage, and that's a lot more than 20k people. Even if they're not all playing, it should be similar, unless Pokémon Go is really, really bad with bandwidth.


They apparently had exactly 0 mobile cells. Everyone who's been asked has said they didn't notice any.


That seems like the most obvious thing you would do for an event like this. Are they really that bad?


Bandwidth-wise, it's really not that much.

Looking at my history (pretty active player), I've used about 184 MB since June 24th (I am not as active on weekends, so for the sake of this example assume I played 18 days out of the month and averaged about the same per day). On average that means about 10.2 MB per day of playing (looking at the graph, however, I would say I used about 20 MB the day of the event, as I was out for about twice as much time that day).

I don't think that is considered a lot.
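
For a rough back-of-the-envelope (the attendance figure and the "twice the play time" multiplier are assumptions, not measurements):

    # Back-of-the-envelope from the numbers above. The attendance figure and
    # the heavy-day multiplier are assumptions, not measurements.
    TOTAL_MB = 184          # usage since June 24th, per the comment
    DAYS_PLAYED = 18        # active days in that window, per the comment
    HEAVY_DAY_FACTOR = 2    # assume an event day means roughly twice the play time
    ATTENDEES = 20_000      # assumed attendance

    per_day = TOTAL_MB / DAYS_PLAYED
    event_day = per_day * HEAVY_DAY_FACTOR
    print(f"~{per_day:.1f} MB per normal day, ~{event_day:.0f} MB on an event day")
    print(f"~{event_day * ATTENDEES / 1000:.0f} GB total for {ATTENDEES} attendees")

A few hundred GB spread over a whole day is trivial by carrier standards; the pain is presumably in having that many radios attached to the same few cells at once, not in the volume itself.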


I realize this statement probably won't go over well here, but Niantic's behavior strikes me as the kind of thing that happens when a company is entirely run by software engineers who refuse to accept relevant advice from anyone who isn't a software engineer.

There are people who understand event planning, there are people who understand how to build F2P mobile games, and I get the impression that exactly zero of these people were consulted, or, if they were, their advice was immediately ignored.


Anyone who's played Pokemon Go or Ingress can refute this; their UI design is abysmal at best, the game suffers a million bugs, and patches introduce at least as many as they fix, etc. A company run by Software Engineers would have hired competent UI developers and QE engineers and would have demoed the hell out of the event before putting it into production...

No, I'd argue Niantic may originally have been engineering driven, but once Ruthless cut them from Alphabet, they went straight for the greed play, developed a game with an IP that would sell billions with next to no effort, and are now tightening the screws on the player base with Pay-to-Win gym tickets and events requiring as many Lures and Incenses as you can buy to continue extracting income.


> A company run by Software Engineers would have hired competent UI developers and QE engineers and would have demoed the hell out of the event before putting it into production

UI developers? Testing? You have apparently worked with very different software engineers than I have! Most I've worked with have followed the "if users don't understand the UI it's their fault" UI design philosophy, and all testing is handled in production because no one wants to put emphasis on testing when they could be working on the next feature.


If that were the case you would hope they would at least be good at software engineering, but all evidence points to that being completely false.


I think that's overstating it a bit. I went to an Ingress anomaly in Brooklyn last summer and everything worked fine. It wasn't the smoothest run event ever, and they could have used more staff, but it was an enjoyable time.

There's something different about what they were trying to do with Pokemon Go that made it suck. They don't have experience with events like this yet. And if someone made long-distance travel plans, just getting your entry ticket refunded doesn't begin to make up for your loss.


As an Ingresser myself, I'd say there were two big sources of issues vs. your typical Ingress anomaly, and one that would happen in Ingress if it was a bit closer in popularity.

First off, they had everything in one place. For those who haven't played Ingress to that point, anomalies in Ingress usually take place with a primary and two secondary locations in, roughly, Asia, the Western Hemisphere, and Europe. This means that people aren't usually traveling from far away (which probably would increase overall participation) but also that each site's numbers are lower. I figure overall anomaly attendance on a given anomaly weekend might well rival that of this event -- but they're spread out in geography and time.

The second aspect is that Ingress has much more encouragement in-game of local organization. I understand quite well why Pokemon Go lacks in-game chat, but it saps local organization. Locals play an important, perhaps critical, role in Ingress event organization, and provide Niantic with free boots on the ground to figure things out. I know there is some local organization going on in Pokemon Go, but it's forced to be based on random encounters, and folks are still unsure whether to organize on team lines or just geographically. (With the current setup, I'd say geographically -- if there's anything to be learned from Ingress, it's to tamp down factionalism.)

The last thing, the one that I think would hurt Ingress equally? It seems there wasn't a limit on ticket sales. In Ingress there are special extra-value packs that get you extra tchotchkes, both virtual and physical, but the base 'free' ticket is unlimited. It looks like the same was true here. They probably wanted to avoid any chance of scalping or the like, but imagine if they had limited it to, say, a quarter of the attendance?


I'm making a massive location-based game myself right now (currently have 140 million POIs in the USA alone) and have been studying how Niantic makes decisions for about 9 months now. It's pretty crazy how poorly they handle every misstep they make, though I think they did a good job quickly giving refunds, rewards, etc. after this one, as well as a decent-sounding apology.

Of course, they should have been better prepared for the event, but they did seem to do better at the cleanup this time. So... baby steps?


Just as glad I didn't get a ticket ($20? sure. Scalper prices? Heck no) and waste the time. This whole thing is going to turn into a real fiasco for some time due to the number of people who bought resold passes (so the original seller sold it and now gets their original $20 back), people who traveled (I know many got hotels, not sure how many flew in), and questions of how they're going to get that in-game currency to people (reportedly some are still in line to get in, while others are waiting in a not-quite-as-long line just to get out).

Each person attending has a uniquely-numbered wristband pass, and apparently on entry they're getting an envelope with a patch and a unique-to-them QR code which needs to be scanned after spinning a special "Pokestop" and effectively checks you into the event. Since the game is crashing basically at startup for most folks, that makes it a bit of a challenge just to get in. Still, I suspect the in-game credit is going to be based on people who checked in with that code since the ordering process didn't involve providing a game login.

Also, I've seen some folks questioning why they didn't have WiFi set up, but I'm pretty sure that the way WiFi works would make that almost impossible anyway (2.4 would be a bad joke no matter what, 5GHz might be possible over most/all available channels, APs that can handle hundreds or thousands of simultaneous connections are rare and probably require major advance planning, AP broadcast power would probably have to be quite low to allow large numbers of APs in a physically small area, etc.).


A company I used to work for specialized in exactly this case - high density WiFi. They have an 8 radio / 24 antenna AP: https://www.xirrus.com/product/xr-4000-series/

You can use six 20 MHz channels in the 5 GHz band and 1 or 2 channels in the 2.4 GHz band on each of these multi-APs, and you can get much smaller "cell sizes" (low power signal, smaller signal area) than cellular providers. You can bump clients from busier channels onto less busy ones to force better balance.

WiFi drivers for clients and APs are a big pile of bugs - the resilience (retries at lower rates, chip-timeout-resets, etc) covers a lot of them up. You can get more consistent throughput and latency by digging in and fixing these bugs (and working around bugs in most clients). Most good WiFi APs are debugged until they work pretty well for the "typical" home or office use-cases. But the Xirrus APs (and some competitors to some degree) are debugged for the "high density" use-case - multiple hundreds of users per multi-AP, and maybe 8 to 30 of these multi-APs in a large conference hall or stadium.
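
To put rough numbers on what a deployment like that means for an event this size (the per-radio client count below is an assumption, not a Xirrus spec):

    # Rough sizing sketch for high-density WiFi. The comfortable per-radio
    # association count is an assumed ballpark; real planning is more involved.
    ATTENDEES = 20_000          # assumed attendance
    CLIENTS_PER_RADIO = 40      # assumed comfortable association count per radio
    RADIOS_PER_MULTI_AP = 8     # per the multi-radio AP described above

    clients_per_ap = CLIENTS_PER_RADIO * RADIOS_PER_MULTI_AP
    aps_needed = -(-ATTENDEES // clients_per_ap)   # ceiling division

    print(f"~{clients_per_ap} clients per multi-AP")
    print(f"~{aps_needed} multi-APs for {ATTENDEES} attendees")

Sixty-odd multi-APs across an outdoor park is well past the "8 to 30 in a hall" range above, which is why the low transmit power and careful channel reuse matter at least as much as the raw AP count.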

You have to know what you're doing. It is expensive. Niantic has a hugely popular service. They're making money. But they never seem to know how to handle the massive scale that was predictable after the first month or so. They sold tickets; they knew how many were coming. There are people who know how to handle that. Niantic just didn't feel like doing the research to find out how to handle it or who could. Their systems fall over again and again and again.


There are definitely companies that have the kind of wifi infrastructure you are talking about and offer rentals to the people who organize these kinds of events. Here is an example of one[0].

Wifi is often deployed at large events (PAX East for example) and it is more than feasible if the company is willing to pay the money for another company to do it.

However, their server infrastructure is another question entirely. These days, making scalable network services with very little downtime is not something that requires a dedicated networking team and hundreds of millions of dollars. I can't imagine they have very many good excuses for why they continue to allow this to happen, other than that they don't value investment in their infrastructure enough (despite being a company completely dependent on the internet).

The problems they are experiencing are ones that have been causing trouble, and have had solutions, for years. They just seem to be unwilling to properly invest in those solutions, which are cheaper than ever (considering the complexity of the issues in the grand scheme of things).

[0] https://tradeshowinternet.com/solutions/event-organizers


They may also be having load-balancer issues, assuming they have front-end servers that check whether accounts are supposed to be directed to a specific set of servers. Assuming that they normally load-balance based on geographic location, the check for specialized server connections may be something that they never actually load-tested beyond confirming that it worked for their staff in testing.
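
A minimal sketch of the kind of routing check being speculated about here; the names and the registry lookup are hypothetical, not anything known about Niantic's actual setup:

    # Hypothetical illustration only: route checked-in event accounts to a
    # dedicated pool, everyone else to the regular regional pool. The point is
    # that this extra per-request lookup is exactly the sort of path that is
    # easy to skip in load tests.
    EVENT_ACCOUNTS = {"trainer123", "trainer456"}      # assumed check-in registry
    EVENT_POOL = ["event-fe-1", "event-fe-2"]          # assumed event servers
    REGIONAL_POOL = ["us-central-fe-1", "us-central-fe-2"]

    def pick_backend(account_id: str) -> str:
        pool = EVENT_POOL if account_id in EVENT_ACCOUNTS else REGIONAL_POOL
        return pool[hash(account_id) % len(pool)]

    print(pick_backend("trainer123"))    # lands on the (possibly undersized) event pool
    print(pick_backend("someone_else"))  # lands on the regular regional pool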

I asked earlier and have asked again whether anyone at the location but not officially attending is having better luck connecting; that would help to narrow it down between carrier issues, Niantic server issues, and Niantic server-for-this-event issues.

Edit: They expanded the area for special/regional spawns to 2 miles around Grant Park but still only for people who "checked in" at the park, and people who've left are reporting much better luck when they're away from the park. That may indicate that most of the issue was insufficient networking.


People in a stadium aren't going to be quite as focused on their phones, but lots of people in a tight space isn't an unusual occurrence for networks.


Stadiums have issues with this too. I was at Marlins Park for the All-Star series. During the Home Run Derby I had press access, which had special Wifi credentials on a completely separate network.

It was down for the entire game. A lot of hotspots popped up in the press area.

Then during the actual All-Star game I tried the regular stadium Wifi. It was down for most of the game.

T-Mobile was up for most of the game, so I was still able to publish photos and read people's reactions. A good thing as T-Mo was the main sponsor for the games.


It's a challenge, but a very well-characterized one that all enterprise-grade access points are built to handle. Lots of access points, with radio power turned way down to prevent neighbor interference, and channels selected to avoid overlap with neighboring APs.

Shameless plug - my ex-employer Meraki is particularly good at supplying these systems for events, since they have automatic channel-selection and good tooling for selecting power levels, and let you set up configurations at your leisure that the APs will use as soon as they get an internet connection and fetch the configs from Meraki servers.


~15k people in a small section of Grant Park all trying to play a somewhat data-intensive online game that requires constant communication with servers is a VERY different activity stream than people in a stadium. Might be closer to think of those people in a stadium all trying to stream video of the event.


Apparently the game is a few megabytes per hour (this makes sense to me, the map isn't very detailed, a lot of other stuff is very cache-able).

So I think the experience in stadiums will translate okay. Of course it won't be the same, but it's tens of thousands of radios in high proximity.
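
As a hedged sanity check (the per-sector figures below are assumed ballparks, not carrier specs):

    # Compare aggregate demand ("a few megabytes per hour" per device) with a
    # rough assumed LTE sector capacity. Throughput is rarely the bottleneck;
    # the number of simultaneously attached devices per sector usually is.
    DEVICES = 20_000
    MB_PER_HOUR = 3                  # "a few megabytes per hour", per the comment
    SECTOR_MBPS = 75                 # assumed usable downlink per LTE sector
    MAX_ATTACHED_PER_SECTOR = 1_000  # assumed practical connected-device limit

    demand_mbps = DEVICES * MB_PER_HOUR * 8 / 3600
    print(f"Aggregate demand: ~{demand_mbps:.0f} Mbit/s")
    print(f"Sectors needed for throughput alone: ~{demand_mbps / SECTOR_MBPS:.1f}")
    print(f"Sectors needed just to keep devices attached: ~{DEVICES / MAX_ATTACHED_PER_SECTOR:.0f}")

By that kind of math the raw throughput is almost a rounding error; keeping tens of thousands of devices simultaneously attached and scheduled is the part that hurts.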


Is the game that data-intensive? What data is there to send/receive that you're comparing it with streaming video?


The game uses Google's protobuf protocol, which makes communication more efficient, and the game updates (I can only tell from <= 0.35 analysis) every 10-30 seconds, e.g. loading pokemon spawns and map entities of the current map area.

The most expensive data in 0.35 was:

- Pokemon models (They were dynamically loaded but then cached)

- Pokestop images (Server sends an image url and the client searches for a cached version)

- Syncing the player state with the client (Inventory, rewards, egg states..)

- Updating the map (Future pokemon spawns, pokestops/gyms in view..)

If a player is idle and doesn't move, just keeps farming a single pokestop and battles sometimes, then it should be relatively low cost, since the game doesn't use realtime communication (it's actually HTTPS based lol).
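
For illustration, here's roughly what that kind of periodic HTTPS polling looks like from the client side. The endpoint, payload fields and JSON encoding are hypothetical stand-ins (the real traffic is protobuf), so treat it as a sketch of the pattern, not the game's API:

    import random
    import time

    import requests  # assumes the 'requests' package is available

    # Hypothetical polling client: every 10-30 seconds, send the current
    # position and receive nearby spawns/stops. Placeholder endpoint only.
    API_URL = "https://example.invalid/get_map_objects"

    def poll_map(lat: float, lng: float) -> None:
        try:
            resp = requests.post(API_URL, json={"latitude": lat, "longitude": lng}, timeout=10)
            resp.raise_for_status()
            print("map objects:", resp.json())
        except requests.RequestException as exc:
            # On a saturated network this becomes the common case, and retries pile up.
            print("poll failed:", exc)

    if __name__ == "__main__":
        for _ in range(3):
            poll_map(41.8756, -87.6244)         # roughly Grant Park
            time.sleep(random.uniform(10, 30))  # matches the 10-30 second cadence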


The pokemon models are loaded over the network? Whatever for? Seems like a big waste of precious data


So you can add/update them without doing an .ipa/.apk update ? Seems really convenient, and a one time deal if it's cached.


Last year I developed a server emulator for the game. On the server's first run, it cloned all models from the official servers to a local folder. The download urls and the models themselves were encrypted, and the urls got disabled after ~24 hours. Other information, like how big a model gets displayed inside the game world, was declared inside a GAME_MASTER file. When connecting to the custom server, the client asks for the GAME_MASTER file and the models as needed. It was pretty fun to edit these and send the players custom models like a GIANT Snorlax or a red Pikachu. I'm not 100% sure, but I think the GAME_MASTER file even contained a "model signature" to trigger a forced model update, so it's possible to overwrite already-cached models. In the first weeks Niantic broke the model of Starmie (it turned full black) and I'm pretty sure they fixed it without a full game update. You can check out the emulator project here: https://github.com/maierfelix/POGOserver
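
As a toy illustration of the "edit the GAME_MASTER before serving it" trick described above; the field names are invented for readability (the real file is a protobuf with a different structure):

    import json

    # Invented GAME_MASTER-style structure, for illustration only.
    game_master = {
        "templates": [
            {"id": "POKEMON_SNORLAX", "model_scale": 1.0, "model_signature": "abc1"},
            {"id": "POKEMON_PIKACHU", "model_scale": 1.0, "model_signature": "def2"},
        ]
    }

    def embiggen(template_id: str, factor: float) -> None:
        for t in game_master["templates"]:
            if t["id"] == template_id:
                t["model_scale"] *= factor
                # Bumping the signature is what would force clients to refresh
                # an already-cached model, per the description above.
                t["model_signature"] += "-mod"

    embiggen("POKEMON_SNORLAX", 10.0)   # the GIANT Snorlax trick
    print(json.dumps(game_master, indent=2))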


I haven't actually measured anything, but I know when it first came out there was some concern about people possibly hitting bandwidth caps. I think part of the issue may be overhead - it's regularly communicating back to the servers with the current location, and getting back a list of coordinates for stops within a certain distance (with a bit of other data), coordinates and other data for Pokemon spawns, etc. It's also using Google Maps data to draw a map of the terrain including roads, water and building outlines, so all of that is also something that changes as you move around. Finally, people are also supposed to "spin" Pokestops and gyms, each of which also has a photo of the piece of artwork, etc. that the stop is located at. Along with the map data, those photos are not preloaded in the app, so they have to be downloaded at least once. I don't know how much caching of those images is done - hopefully some, but if they're doing little or no caching then that's yet another chunk of data to pull down.

Looking at the screen and a map, it looks like the game renders an outline for every structure within 600-650 meters of me, so for an area with a lot of structures that could be a fair amount of downloaded data depending on how good their caching is. If their download process isn't good at handling missing data or transmission errors, it could also be pretty vulnerable to problems on a congested network.
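
If they aren't doing it already, the client-side fix is the usual URL-keyed cache. A minimal sketch, assuming the server only hands back an image URL (hypothetical, not the game's actual code):

    import hashlib
    import pathlib

    import requests  # assumes the 'requests' package is available

    CACHE_DIR = pathlib.Path("stop_image_cache")

    def fetch_stop_image(url: str) -> bytes:
        """Return image bytes, downloading only on the first request for a URL."""
        CACHE_DIR.mkdir(exist_ok=True)
        path = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
        if path.exists():
            return path.read_bytes()   # cache hit: no network traffic at all
        data = requests.get(url, timeout=10).content
        path.write_bytes(data)         # cache miss: download once, keep it
        return data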


Slight correction, as far as I know they were using OSM, not Google Maps.


They use Google for the base map everywhere except South Korea, where they do use OpenStreetMap.

There are rumors that some game features (biomes and such) are influenced by features in OpenStreetMap, but no clear statement from Niantic, just a bunch of weak correlations (like OSM having a path where a bunch of people go and try to play the game...).


There was a big problem where people were manipulating biomes in the game by messing with OSM, so unless there's some sharing going on with Google..?


Base map vs hidden game features. Outside of Korea the base map is 100% Google.

The hidden game features, who knows. Lots of people are very convinced that OpenStreetMap features have an impact. I haven't really seen anything all that convincing, just lots of correlations that could have other explanations (game analytics, other datasets, etc.).


How many people are live streaming a concert at a given moment? Maybe 5-10 on Periscope, in addition to maybe 2x that number on Facebook. Definitely not 15k.


>When Niantic’s John Hanke took the stage, he was greeted by an audience a few thousand deep, many of them chanting “FIX YOUR GAME” or “WE CAN’T PLAY!”

Can you imagine how stressful this would be? It's one thing to get disappointing feedback data on bugs, UX, etc. Standing in front of thousands of angry people screaming at you to fix your software this instant? Yikes!!


I remember Google Cloud was doing a ton of press about how Niantic was using their platform for Pokemon Go. I wonder if that will backfire for them now.


These are completely unrelated, no? One is about scaling the servers and stuff behind the scenes, this thing is about cellular data networks failing and dying.


Humans just aren't made for that sort of feedback. I couldn't imagine 10 or 100 people yelling genuine critical feedback like this at me -- let alone thousands. I imagine he didn't have a great day.


Even at big music festivals, mobile networks are usually totally overloaded just by the sheer amount of people logged in at the same time, without even doing anything. Even receiving an SMS can take minutes. I don't assume they would do an event like this without letting providers know in advance to upgrade capacity, but it seems like that's what happened here.


> Even at big music festivals, mobile networks are usually totally overloaded just by the sheer amount of people logged in at the same time, without even doing anything

In Germany, ordinary political demonstrations (from my experience, everything from 2k to 80k is enough) are sufficient for O2 service to totally break down, even in the center of major cities.

The Oktoberfest in Munich has similar problems despite operators putting up dozens of small BTS, but then again, it's up to 350,000 people concentrated on maybe 200,000 m², and it's really hard to get them all serviced well enough to live-stream their Brathendl and Maß of beer to Instagram...


They have this figured out in Austin for huge events such as ACL. Temporary towers erected in the correct locations can easily handle the extra load with proper planning. I went two years ago and had absolutely no issue with LTE or service.


The better (read: twice as expensive) provider in Germany does that as well and I rarely have a problem even in crowded events (haven't tried at Oktoberfest). To be precise for well-known large events all networks erect temporary towers but inferior ones cheap out on capacity. On the other hand, Oktoberfest has an order of magnitude more visitors than ACL in a very small area. Probably physics will limit what you can do.


Yeah. A single beer tent can pack well over 10k people - the optimal solution would be a set of antennas per tent but people don't like seeing antennas, so the operators put their COWs in the "backstage" areas, thus limiting range.


I've always wondered why cellular is affected by this, anyone have a link?


Disclaimer: I worked at an OEM phone company.

Cell towers get saturated pretty easily by crowds. Network providers probably did not account for the crowd, or even if they did, it wasn't enough.

I would assume a company like Niantic asked network providers to install additional cell towers beforehand, so my best guess is they tried their best but still failed.

The technical details are quite complex, and you would need to read up on GSM, UMTS, LTE, TDMA, and CDMA, but the bottom line is that the overall technology isn't good enough yet.


There's a limited amount of "space" in the air, ie not everyone can be transmitting at the same time. This is fixed using a few different techniques depending on the cell network but for GSM something called TDMA (Time division multiple access) is used. This basically gives your phone a certain amount of time to broadcast on before it should leave the airwaves for someone else to transmit on. Too many people in one location means not enough time for everyone to transmit.
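
To put a rough number on that for classic GSM (the carrier count per sector is an assumption; real deployments vary a lot):

    # Rough GSM/TDMA capacity sketch: each 200 kHz carrier is split into 8
    # time slots, and a full-rate channel occupies one slot. The carrier and
    # sector counts are assumed, typical-looking figures.
    SLOTS_PER_CARRIER = 8      # fixed by the GSM TDMA frame structure
    CARRIERS_PER_SECTOR = 4    # assumption
    SECTORS_PER_SITE = 3       # typical three-sector site

    slots = SLOTS_PER_CARRIER * CARRIERS_PER_SECTOR * SECTORS_PER_SITE
    # A couple of slots per sector are eaten by signalling/broadcast channels.
    usable = slots - 2 * SECTORS_PER_SITE
    print(f"~{usable} simultaneous full-rate channels per three-sector site")

Call it a hundred-ish channels per site against tens of thousands of phones in one park; 3G/4G do considerably better per cell, but the same flavor of per-cell limit still applies.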


This is the main reason yes. No amount of backhaul can change this.

There are 2 solutions to this problem.

1) Distributing a web of much smaller cells at lower power, so far fewer devices are connected per node.

2) DIDO pCell technology developed by Artemis. All antennas transmit, and the waveforms from each broadcast interfere around each receiver to create a valid signal for that single device.

Both systems require loads of calculations by the broadcasters, either to manage handoffs between each mini cell or to calculate valid distributed broadcasts for each receiver.

This type of event would have been a great opportunity for Artemis to show off their tech.


That's not really quite accurate on modern networks. Most of them use CDMA, which allows many devices to transmit/receive at the same time, assuming that they can modulate their power with respect to distance from the tower/etc. (which is a requirement for the "xor" to cancel out).

That said there's still a bunch of other stuff you have to sort out if you want to support that many devices in a small space.


> Most of them use CDMA

Really? I thought CDMA was something only Verizon used for their network. I tried looking into it a bit prior to writing the original answer but I couldn't find anything other than a mention on the TDMA Wiki [1] which has a [citation needed]. Do 3G/4G networks across the world use CDMA as well? Because I know that in Sweden carriers are buying up large chunks of frequency bands to be able to increase 4G coverage, that shouldn't be needed with CDMA, right?

[1]: https://en.wikipedia.org/wiki/Time-division_multiple_access


"Many" is still not thousands or even hundreds.


The technologies used and underlying principles have changed enormously from 2G/GSM to 3G/WCDMA to 4G/LTE/OFDMA, but a lot of it comes down to questions of how many individual connections each tower can handle and the minimum amount of airtime communicating with each device requires. At one point (still?) there were also questions of how strongly each device and tower must broadcast to keep the received signal at each end within acceptable levels.

Edit: Expanding on this a little while I still have time to edit, and all of this is from a non-telecom person so it's likely I have errors in here. Still, this should provide a starting point for the history and what to look for to get more current/accurate info.

Most 2G was GSM which used TDMA (Time Division) - basically, each device connecting to the tower was assigned a "slice" of the available time and had to broadcast within that time slice. When the tower ran out of slices, some devices simply couldn't connect to it. Sprint and Verizon were the exceptions, they used CDMA which allowed many devices to broadcast at the same time and was able to decode data for/from each device based on the Code Division used to encode it. CDMA towers "breathed" which meant that their coverage area could vary based on how many devices were connected because the devices and towers had to keep broadcasts within particular power levels at the endpoints. CDMA allowed more devices, TDMA could allow larger coverage from each tower by reducing the number of time slices (which also reduced the number of devices that could connect). Notably, CDMA allowed devices to talk to multiple towers at once with reconciliation being handled by a smarter backend, while TDMA required more explicit handoffs between towers. Way back when this was why on GSM your call would simply drop, but on CDMA you'd simply develop a severe case of robot voice.

3G almost all moved to WCDMA, which I believe had many of the same advantages in density as CDMA.

4G moved to something different (OFDMA?), which I haven't read enough about to really understand but which is technically quite different from WCDMA.

Most LTE uses yet another protocol. On T-Mobile, this was noticeable when T-Mo "refarmed" its network, converting a bunch of frequency bands from 4G to the newer higher-capacity but incompatible LTE. I know a few people who had 4G phones on T-Mobile and were offered free replacement phones because T-Mobile had basically yanked the actual 4G network out from under them.

Basically, there's a lot out there, it has changed a LOT over the last 10-15 years, and many of the changes were actually complete replacements rather than incremental upgrades.


I believe it's the backhaul capacity that's the problem. Even assuming "only" 1 MBit/s constant traffic per customer, an uplink of 1 GBit/s can only handle a thousand clients tops.

Cheap-ish providers are worse - they usually don't run fiber to every tower because laying fiber is expensive, but instead only to one tower which then distributes uplink via directional microwave to nearby towers. Once that link gets saturated or has issues, the whole area is screwed.

Providers like Deutsche Telekom (or other former monopolists) usually have it easier as they have huuuge amounts of fiber and especially conduits in the ground, which means it's easier for them to wire up even small towers with real dedicated fiber uplink.


This is pretty off base. Backhaul is the "easy" part here. It's easy to get way more bandwidth available on a microwave link than the bandwidth available to clients.


What's to wonder about? Each tower can only handle a certain number of people, and the cell phone companies don't put the number of towers needed for an event like this in permanently.


Anyone who has ever tried to send a photo at Soldier Field during any large event could have predicted the crippling of the network that happened here. Maybe I missed it in the article, but did the event organizers not set up an ad-hoc network for this?


What they should have done is made arrangements to have COWs on site from the major providers.

https://en.wikipedia.org/wiki/Mobile_cell_sites#Cell_on_whee...


How do networks manage to have sufficient backbone for those mobile towers?

What good are those towers if a thousand people can connect but cannot use any internet because their backbone cable is so slow? I once was at a festival in Germany where they had such measures but lacked backbone capacity (I had full signal strength in a remote area but the internet was unusably slow).

Not every town has fiber optic to support such a surge of people.


Many of them use point to point microwave (if the provider has a good microwave network in the city).

Those are good for a couple of gbps each and can go anywhere in a city that's got line of sight.


Sears Tower (Now known as Willis Tower) is a major microwave termination point in Chicago; great LOS from most of downtown.


Your situation doesn't tell us anything about whether the congestion was the backhaul or not. IMHO, their backhaul most likely wasn't the issue. You can have 100,000 people all right next to a cell tower and while they won't have any usable service, they'll still see 5 bars.


Yeah, but it’s Chicago. A massive city, not a town.


Agreed, this should have been totally predictable based on Bears games, lolla, etc.


Niantic organized a lot of big Ingress events for the last 3 years so they _must_ have known about these problems but somehow they still managed to fuck it up.


The Ingress events have worked as long as players are distributed around a city. When everybody was in a single location (for example a shard), stuff would fail.


This is a situation where a mesh network topology (like Firechat) would be necessary.


Mesh is a poor fit for these situations; the limiting factor at these scales isn't infrastructure size, but EM spectrum. The accepted solution is mandating players go onto WiFi, where you can get them to much lower emitted power by deploying enough APs, close enough to any player location, that no clients need to shout to get their traffic heard.


Sigh. Except in battlefield and deep off grid situations mesh is always a bad idea.


Or Disneyland.


Are there any Niantic employees active here on HN? As an engineer, I would be very interested in hearing some first hand accounts of some of the technical problems Niantic's engineers have had to deal with, not necessarily just for the Go Fest, but for the game in general. To the extent that they're allowed to talk about it, it would be very interesting to get an AMA style take on some of the challenges.


As the lead engineer on an upcoming game of a similar scale, YES! This would be so awesome!


Niantic just posted an update[0]. These are the steps they will be taking:

All registered attendees will soon receive an email with instructions on how to receive a full refund for the cost of their ticket. These instructions will be sent to the email addresses associated with your Pokémon GO account.

All registered attendees will receive $100 in PokéCoins in their Pokémon GO account.

Special Pokémon, Eggs, and check-in PokéStops appearing during Pokémon GO Fest have had their range increased to a two mile radius surrounding Grant Park through Monday morning, July 24. These Pokémon and Eggs will only be visible to Pokémon GO Fest attendees who validated the QR code they received when they entered Pokémon GO Fest. Attendees who were unable to validate their QR code during the event can do so through the special PokéStops through Monday morning.

All registered attendees will have the Legendary Pokémon, Lugia, added to their account.

[0] http://pokemongolive.com/en/post/pokemongofestupdate


> All registered attendees will have the Legendary Pokémon, Lugia, added to their account.

Lugia? That’s some damage control right there.


Beg for forgiveness and give out free stuff, that generally makes people much happier :)


I guess that's no surprise? Downtime with every single event so far, so why should this one be any different?

WiFi is a thing and should be able to handle it just fine. Heck, they could even block anything but PGO and get away with low bandwidth too.

When volunteers can manage to get WiFi working for thousands of nerds at indoor LANs... Let's just say that Niantic should be able to avoid this with a little bit of planning.

This is the company running one of the most unstable and buggy games I have ever played, so I would expect no less. At least the game is very fun for a lot of people... When it works.


Don't large LANs use... LAN? (Ethernet.) I could be wrong, but since many desktops still don't have wifi, you know where people are, and it's so much more stable at large scale, it would make sense.


Of course ethernet is used, but WiFi is often used for phones and additional devices like game consoles, as it's mostly 1 port/person. The Gathering and DreamHack are two good examples of events with WiFi scaling to many thousands of connected devices without many problems. More or less every AP provider has solutions for this.


This game has been so much of a trainwreck, I don't see why Game Freak licensed Niantic to use their IP.


Because ingress was an enormous success and a cultural phenomenon in Japan.


Not only in Japan but also in Germany and the US


Niantic seems to have expertise in mobile gaming from Ingress. Which other company does?

If your comment is based on this event, it seems unfair: Who else has successfully done an event with similar bandwidth requirements before? Also, my understanding is that the cellular network is the problem, but you might correct me if I'm wrong.

Besides, Blizzard had spectacular problems in the past with WOW launch events, simply because no one had ever done something of that /scale/ before.


It sounds like they planned an event for 20k people without telling the mobile networks they were doing it. For festivals, mobile networks usually deploy extra capacity to handle a larger than usual number of people. It sounds like this could have been handled by a large number of local WiFi hotspots as well.


Yeah, I would have gone with the wifi option as well if I was setting up an event like this.


How do you just "deploy extra capacity"? Do they install new antennas just for an event?


Deployment of cell sites at venues like stadiums for sporting events and outdoor music festivals has been pretty common for years and years. Unless you're a hermit you almost certainly have seen a COW at one point or another. Cf. the Wikipedia article on mobile cell sites, especially COWs.


I've seen them around alright, just not realized how common or easy-to-set-up they might be for an event like this, or how they work from an inter-provider standpoint. Do they not cost a whole ton for the organization setting up the event, and wouldn't every provider (T-Mobile, AT&T, etc.) be required to set one up? It seems like such a massive headache and cost for something like a Pokemon-catching crowd that might not even generate much revenue(?) that I'm confused how everyone is saying "just deploy extra capacity". Like do providers just follow crowded events for free and put antennas wherever people are as if it's no big deal?


Or the mobile networks fucked up and underestimated the network load.


20k is actually not that many people as far as these things go. Over 50k in that confined stadium area and you really start hitting some hard limits with current cell tech. 20k should have been no problem with careful planning.


>20k is actually not that many people as far as these things go

20k people in close proximity uploading and downloading constantly without breaks? What specific event is that comparable to? Music festivals don't have all attendees constantly using their data network.


The point is, there are many places in the world that draw large crowds into a tiny area on a weekly basis, and the cell networks will fall over and die without special measures.

Did they do anything at all to anticipate that load? It seems not. "We tried and failed" is understandable, but simply inviting 20k people and expecting them to play a mobile game is monumentally foolish.


Has it? I read recently that they still had around 70m MAU.


The integration of their location API with AR was reasonably well done at the time. I agree though, Niantic has no business making games. I try to imagine a world in which they license their AR SDK.


You think the AR was well done? Serious question, not trying to flame or anything, but I disagree. For me, the AR is pretty much just overlaying a Pokemon graphic on top of camera footage; there's very little interaction with the camera input.

In fact, after playing for a bit, I turned off the AR because of loading times.


You're talking more about the graphics than the actual AR experience. It was tied to the real world as mentioned in the sibling comment. I imagine they accumulated a lot of data from Ingress. More often than not, the gyms and pokemon hotspots were located at some point of interest and/or place that had heavy foot traffic, increasing the probability that you'd run into other players.

It was a global sensation for a reason and despite its extensive flaws, it was one of the funnest gaming experiences I've ever had. Not because of what happened in the game, which admittedly was beyond terrible at every level from core gameplay to polish and UI (I've seen and worked on better at 24 hr gamejams. I have no idea why Nintendo didn't step in and take over development after the craze happened). The gyms closest to me were located in a popular shopping mall that had no shortage of players. It led to a few new meatspace friends for me.

There was something meta-magical about seeing someone walking down the street glued to their phone while you're doing the same, both of you realizing that you're trying to catch some rare pokemon. Often enough, it led to some friendly social encounter, providing just a glimpse of what AR gaming could be.

It also inspired the dev community to make all sorts of apps and mods that addressed many of the game's shortcomings. I left after Niantic dismissed the dev community, because the game could've been so much more with the power of the crowd. Instead, Niantic preferred to keep the game shitty so that they could extract profits from the parents of 8 year olds.

I was doubly disappointed when it was revealed that the Switch, a mobile game console, would have no GPS chip. Nintendo could have pioneered AR gameplay like they did with 3D gaming. Maybe that's expecting too much from a company that doesn't even get online gameplay.

It only lasted two weeks, but that's the standard shelf life of most games that aren't endless grindfests.


Shameless plug: have you heard about Terra Mango at all (it's in closed beta)? I'm a dev on the game and we're trying to figure out how to connect with more people with exactly your sentiment about PoGO and Niantic.


That's not the AR though; the true AR aspect is tying it to the real world.


The event looked interesting and the game is a great concept. Sorry to hear about this fail as well.


The weather in Chicago from 5pm until early this morning was filled with storms. Storms were also threatening this afternoon, but surprisingly they didn't come. I had to cancel a beach meetup that I was looking forward to over this.


Could they have used a mesh network to help load balance connections to WiFi APs?


The silver lining is that all these people, who would otherwise only interact with each other virtually, had a perfect opportunity to talk to each other face-to-face, fwiw.


What surprised me in this article is that people still gather to play Pokemon Go.

I personally never got into the game, while my cousin was totally into it, but where I live (a medium-sized city for Europe), the fad totally passed less than 6 months after launch.



