
I attended SIGGRAPH 2005, and there was a group in the booths with a headset you would put on that would alter your balance to make you walk in different directions. They had a video playing of someone walking with this device and a blindfold on, while someone with a joystick turned them left and right.

Looked it up and it appears to be a similar type of technology: Galvanic Vestibular Stimulation:

https://dl.acm.org/citation.cfm?id=1187315


A while back they added it again, so you can use + in your searches now.


> If what you do doesn't make an impact for customers and/or other teams in the organization, was it really worth doing?

The problem is that, at review time at Google, you have to be able to "quantify" the impact. Many types of impact are quantifiable (e.g. "Scaled server requests from 100 queries per second to 1,300 qps", "reduced code size by 30%", etc.).

It's much harder to measure, say, the impact of a refactor where you made the code easier to reason about and more maintainable, so that future work can be done on it more easily.

I witnessed the same thing at Google; I worked on a project that everyone joked only existed because the person who wrote it wanted promo, and the best way to get it was to design a very complex system, and convince others to adopt it. (He did get it, and promptly switched teams.)

Some things have been made better, though. I've heard that going from L4 → L5 now involves much more input from your manager, since they know your work and can speak to the positive impact you had on a project, even without quantifying something like a refactor.


> It's much harder to measure, say, the impact of a refactor where you made the code easier to reason about and more maintainable, so that future work can be done on it more easily.

I've also seen refactors that just made life difficult for everyone else with constant non-functional changes. In the end there is a lot of fashion in programming, and while some refactors are worthwhile, most are not, in my experience.

The refactor is supposed to provide payoff in the future, but what normally happens is fashion changes and someone new comes by and says "this code is shit" and starts the process over. The supposed benefits never accrue.


You quantified what you said is hard to quantify.

Measure commits/authors before and after a refactor.
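
A rough sketch of what that measurement could look like, assuming a local git checkout (the repo path and refactor date below are placeholders):

    import subprocess
    from collections import Counter

    REPO = "/path/to/repo"        # placeholder: checkout to analyze
    REFACTOR_DATE = "2018-01-01"  # placeholder: when the refactor landed

    def commit_stats(date_filter):
        # One author email per commit matching the given git-log date filter.
        authors = subprocess.run(
            ["git", "-C", REPO, "log", "--pretty=%ae"] + date_filter,
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        return len(authors), len(Counter(authors))  # (commits, distinct authors)

    before = commit_stats(["--before", REFACTOR_DATE])
    after = commit_stats(["--since", REFACTOR_DATE])
    print("before refactor: %d commits by %d authors" % before)
    print("after refactor:  %d commits by %d authors" % after)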


The problem is that this metric, like any metric, is both easy to game _and_ liable to provide misleading information.

Measuring number of commits? Create fewer, larger commits. Measuring commit size? Pull in more third-party libraries, even where it doesn't make sense. Author count? Add more/less documentation and recruit or inhibit new devs depending on what your goal is.

Not to mention the number of commits/authors before and after an arbitrary point in time might conflate a successful growing project with a project in a death spiral being passed around from group to group.

It's a good idea, but in practice simple metrics like this often (but not always) devolve into prime examples of Goodhart's law.


Ok, then find another way to measure developer productivity, or reliability in production, or customer features delivered.

If you can’t find a measurable benefit to a refactoring (or anything else, really) then maybe it was not worth doing in the first place.


In science, measuring things until you find a benefit is called p-hacking. Every extra test you do that splits your data along a different dimension, is another independent opportunity for "random chance" to look like positive signal.
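
To put a number on the "extra opportunity" point: a back-of-the-envelope sketch, assuming independent tests at the conventional 0.05 significance level (a simplification, but it shows the shape of the problem):

    # Chance that at least one of k independent tests at alpha = 0.05
    # comes up "significant" purely by chance.
    alpha = 0.05
    for k in (1, 5, 10, 20):
        p_spurious = 1 - (1 - alpha) ** k
        print(f"{k:2d} metrics -> {p_spurious:.0%} chance of a spurious 'benefit'")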

No programming project in existence has enough developers working on it that developer-productivity data derived from a single change wouldn't be considered "underpowered" for the purpose of proving anything.


The obsession with measuring is hilarious. There are plenty of things in life (and jobs) that aren't measurable and are worth doing. Probably all of the important things are actually unmeasurable. Think about it this way: if it's so easy that you can measure it, it probably isn't very important in the grand scheme of things.


No metric can escape gaming when you apply it to rational actors (Campbell's Law / Goodhart's Law). Blind devotion to metrics is just as bad as no metrics at all.


Just yesterday I was discussing the opportunity cost of infrastructure changes, as a new team member was bemoaning our out-of-date patterns...

A high-impact infra change will often inconvenience dozens of people and distract from feature work... you know, the shit people actually care about... (This is analogous to how "Twitter, but written in Golang" appeals to approximately no one.)


> find another way to measure developer productivity

And solve the halting problem while you're at it


Normalizing commit counts per author is not difficult.

https://www.cs.purdue.edu/homes/lsi/sigir04-cf-norm.pdf
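
For illustration only (this is not the method from the linked paper, and the numbers are made up): one naive normalization is commits per active author per month, so that head-count changes alone don't move the metric:

    # Hypothetical monthly totals: (commits, active authors)
    before = [(120, 6), (130, 6), (110, 5)]    # months before the refactor
    after  = [(150, 9), (160, 10), (170, 10)]  # months after the refactor

    def per_author_rate(months):
        # Average commits per active author per month.
        return sum(c / a for c, a in months) / len(months)

    print("before:", round(per_author_rate(before), 1), "commits/author/month")
    print("after: ", round(per_author_rate(after), 1), "commits/author/month")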

Goodhart’s law is not applicable to scientific management because the metrics serve a different purpose.


Goodhart's law is entirely applicable to management (adding scientific in front doesn't actually mean anything). That is one of the prime areas of applicability. People change their behavior to increase a metric at the cost of decreasing other more important things.


In the context of sales, you can have a conversation about perseverance and not taking no for an answer. However:

> At the end of that example, Brandon laughed and said, “I was about to say something.” He paused, and then went on to say, “No doesn’t necessarily mean no.”

Brandon _changed_ the context into something offensive and then made the joke. This was an attempt at a rape joke. He even prepped the audience for it by laughing and saying, "I wasn't going to say this, but..."


This is the crucial detail here. In a vacuum this is innocuous, but Beck specifically went and gave a nod to the context he wanted you to see it in, and that context was rape.


Glad it's here, so I no longer have to AirPlay from an iOS device. But can we talk about the UI? It seems Amazon has dumped an entire web renderer into the app (https://twitter.com/stroughtonsmith/status/93857361817446400...) and is loading their "smart" TV UI inside it.

It ignores the tvOS human interface guidelines (https://developer.apple.com/tvos/human-interface-guidelines/...), discarding all the accessibility features and the focus model.

I get that, to Amazon, the Prime Video app on Apple TV is probably not worth spending any time and effort on. But it's unfortunate for those of us who are paying the strategy tax and getting a "smart" TV app designed for low-powered CPUs.


AIM did go mobile. It was in the iPhone App Store on day 1. Push notifications didn’t even exist back then. When push notifications were announced by Apple in June 2009 as part of iPhone OS 3.0, AIM was the “partner” they used to load-test it during WWDC.

Source: I worked on AIM for iPhone.

IMO AIM struggled because:

- It was a highly tuned, specialized C backend, and it never migrated to something that could be improved easily.
- Backend technical challenges (as well as legal issues) made storing chat history very difficult.
- Hardly any info was collected about AIM screennames, so it was hard to build a social graph from it.
- AIM registration was the same as AOL signup, and that registration process was very cumbersome, IMO.
- Most AIM accounts had no email address associated with them, so it was impossible to do password resets for all the locked-up AIM screennames.

As a client developer on AIM, I found it hard to make a material improvement to the product, though we certainly did try.


I get the meaning of the "Cobra Effect", but I'm not understanding it here w.r.t. disabling paste on password fields.


I think the way they framed The Offer is a good one. He makes it clear that he wants you to stay if you want to stay, and is giving you a way out if you're not passionate or happy in your job anymore.

Many years ago they did that at AOL (a voluntary severance package), and IIRC it didn't come across as that sincere.


To further expand on Apple's recommendation not to make Swift libraries just yet — it's also a matter of compatibility. You could certainly make a Swift framework, but given that Swift as a language is changing every 6 months or so right now, it doesn't make sense to build a framework in Swift.

You could easily hold up another project if you haven't updated your Swift code to the latest version, or the opposite could happen: your Swift framework is up to date with the latest language, but the app using your framework is not, etc.


In addition to some constraints on the type of product (i.e. watch apps have to be focused, super-quick interactions), there's another reason we haven't seen a lot of apps, IMO:

This year was an unusually crazy year for iOS developers. Apple released:

- watchOS 1, with a slow and clunky experience
- iOS 9, with adaptive multitasking
- watchOS 2, requiring a rewrite of the watch app
- tvOS, a whole new platform, with more easily translatable app experiences

Additionally, with the "2-versions-behind" strategy for Xcode, a lot of app developers (myself included) dropped iOS 7 support. This was actually one of the most complex transitions ever (so many backwards-incompatible changes: size classes, presentation controllers, push notifications).

So something had to give. For me, and for many others, I'd say it was probably the watch.

