
Wegmans is the best. They do show it in their home (Rochester, NY) area.

Livingston County, NY, just south of Rochester, reported “too little data”, so perhaps the metrics are misreported in general, which would explain why Wegmans didn't show up in their analysis.

Do you get the benefits of separation by using separate data stores as well, or are most fed from a common database?


Most data is stored in the same database server instance. There is a database access service with separate modules for each database; by convention, each of these modules has exclusive access to its tables, with the exception of a special module that performs cross-database joins, needed for performance in some reporting tasks. Other services don't talk directly to the database; they go through the database service. Some services use their own data stores in a variety of formats.
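
Roughly, the shape is something like this (a minimal sketch; the class, module, and table names are made up for illustration, not our actual code):

    import sqlite3

    class OrdersModule:
        """By convention, the only module allowed to touch the orders tables."""
        def __init__(self, conn):
            self.conn = conn

        def get_order(self, order_id):
            return self.conn.execute(
                "SELECT id, status FROM orders WHERE id = ?", (order_id,)
            ).fetchone()

    class ReportingModule:
        """The one special module permitted to join across table-ownership
        boundaries, which some reporting tasks need for performance."""
        def __init__(self, conn):
            self.conn = conn

        def orders_per_customer(self):
            return self.conn.execute(
                "SELECT c.name, COUNT(o.id) FROM customers c "
                "JOIN orders o ON o.customer_id = c.id GROUP BY c.name"
            ).fetchall()

    class DatabaseService:
        """Other services call this facade; nobody else opens a connection."""
        def __init__(self, dsn=":memory:"):
            conn = sqlite3.connect(dsn)
            self.orders = OrdersModule(conn)
            self.reporting = ReportingModule(conn)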


Is this a problem? Do you find yourself waving at your webcam much?


I find that my hand goes between my webcam and my face reasonably often.

It's still a brilliant idea and I love it, regardless of its limitations.


That isn't true at all. Trunk-based is perfect for this because you get clean release branches from the trunk.

I've worked in large companies very successfully using TBD and releasing quarterly to enterprise customers using our software on their systems.


> I've worked in large companies very successfully using TBD and releasing quarterly to enterprise customers using our software on their systems.

Care to elaborate? I'm curious about those companies and the software you're talking about.

I totally see the trunk-based process working for software that doesn't need to maintain multiple versions, as the original commenter said.

But for multi-versioned software you can't afford to have a single trunk and let your developers have at it. I went and checked the repos for some of the open-source, multi-versioned software that I use.

Well, confirmed.


Why do you need to maintain multiple versions? You should only need to maintain hotfixes on the last release branch. Once the next release is ready, everybody who can update should take that release; any older releases become unsupported.

From a customer's point of view, there should be no difference between a hotfix and a new release. You just have to ensure that each release is interface-compatible with the previous one. The cost of this is well worth it compared to the complexity of juggling multiple branches in parallel.


I think I understand your point, but releasing a hotfix on top of an old version is itself a way of maintaining multiple versions... I worked once with a team supporting multiple versions of a really complex app (in the sense of a lot of features and DB entities). They supported their clients (on-premises) with occasional hotfixes, one service pack, and one version a year.

Sometimes they were already working on changes for a new release before a hotfix was addressed, and with cherry-pick-like operations they took the changes from the hotfixes and applied them to the service pack and new versions with less effort than before they adopted the model.

With a dev team of 10 programmers, 20 people in IT support, and hundreds of client deployments, they built a profitable business (subscription-based, with on-site support) where the branching model helps a lot.

They spend less time managing releases and switching context for priority support, and they handle more changes in less time.


At the top of this thread is a link to a clear explanation in a post by umvi. Very easy to understand.


I like my aeropress, and used to use it with beans from my local roasters.

But as of late I've switched back to just making regular ol' drip coffee in a cheap coffee maker with Chock full o'Nuts coffee.

As easy as an aeropress is compared to other quality brewing methods, I still find the drip machine much easier -- load up the filter and water, walk away, come back and pour a cup. (Plus, I can't seem to get my aeropress not to also pee down the side of the mug as I press it down)

Somehow, my coffee experience hasn't degraded at all. No, it doesn't taste the same, but to me it's still totally fine, and almost always a better coffee than the drip I get from most coffee shops.

Amazing coffee in an aeropress is just a white whale for me.


> and the docs linked from the email are not updated yet.

That about sums up most things Google does for developers.


I thought the standard advice for Google stuff was "there are always two systems - the undocumented one, and the deprecated one"


What I most frequently heard was "There are two solutions: the deprecated one, and the one that doesn't work yet."


That's a wonderful quote that applies to many companies. (I think that will resonate with the Unity developer community right now.)


Can you provide some examples? Need to push for changes like this myself.


As a project manager who runs and attends a lot of meetings :) I recommend these two books about planning and running effective meetings:

* "Effective Meeting Skills" by Marion Haynes is a short workbook from 1988. It looks dated, but the material is still relevant and a quick read. https://www.amazon.com/dp/0931961335/

* "The Surprising Science of Meetings: How You Can Lead Your Team to Peak Performance" by Steven Rogelberg. This 2019 book is great and includes useful findings from real studies of meetings. https://www.amazon.com/dp/0190689218/

This podcast interview with the author is a good introduction: https://www.gayleallen.net/cm-127-steven-rogelberg-on-making...


Airbnb and friends could not exist without the basic tech. Their tech doesn't need to be groundbreaking, but at the very least they're an e-commerce-like platform of sorts.

At WeWork, tech is very secondary to their requirements. Sure, they can make it more and more required -- just like fridges do these days. But for WeWork the tech is about as necessary for you to derive value as a touchscreen on a fridge is necessary for you to get value from the fridge. _Just because it relies on technology doesn't make the technology important to it._


"Twitter has an algorithm that creates harassment all by itself"

What am I missing here? There was no harassment of any sort. Alternative headlines could have been:

"Twitter has an algorithm that helps you gain more followers"

"Twitter has an algorithm that helps you drive awareness"

"Twitter has an algorithm that helps you get more twitter followers for your cause or business"

"Twitter has an algorithm that expands your social impact from beyond your sphere."

---

In other news: public posts on public site go.... public.


The missing piece, which the Twitter thread author only touched on, is that how a tweet is received by a reader depends a lot on whether or not the reader comes from a similar community and shares context with the author. By surfacing tweets to people the author doesn't know at all, it's likely the responses will be more negative in general.

Anyone with a large twitter following knows roughly what the makeup of their follower base is, and they compose tweets accordingly. While always necessary to some extent, it's usually hard to contextualize every single tweet as if it could be read by anyone, so it often isn't done.

As a silly contrived example, let's say I am a software developer who focuses on operating system performance and I tweet something like "I'm working on an algorithm to make killing children an order of magnitude more efficient". (note to real twitter users: never tweet that)

My followers know I'm talking about killing child _processes_ on a computer. So they reply things like "oh, that would be great, it would make this one shell script I have a lot faster to execute" or maybe even "personally I'd rather you encouraged users to use threads rather than forking lots of processes". There might be a heated discussion, but it will be with a HUGE shared context of information.

Now the Twitter algorithm picks it up, and the tweet gets seen by lots of people who don't know anything at all about operating systems. They are, understandably, completely appalled. They start responding with anger. Threats, abuse, etc.

So, Twitter changing the dynamic from "your tweets will primarily be seen by your followers" to "your tweets will frequently be seen by your followers' followers" can actually have a big impact on the platform. It will at minimum take some adjustment. Operating with the assumption of one dynamic when the other is in fact in effect will be... painful.


I get what you are saying, but isn't this what everyone was screaming for years ago when the filter bubble terminology came up? Now we are criticizing networks for showing things outside of our filter bubbles? You can't have it both ways.


Yeah, this definitely is a way to break the filter bubble.

But thinking about it a bit more, it might be one of the worst ways to do so.

For example, assuming roughly that both favorites and retweets represent general agreement, using those mechanisms to surface new tweets to people makes sense. If someone you follow (and presumably respect) quote retweets someone you don't follow with "Yes this!" or something similar, then you're already primed to agree with the person you follow.

But, often at least, replying without faving/retweeting could very well bias toward DISagreement. Now instead you're going to see someone you follow and respect arguing about something, and you're primed to agree with them, and potentially pile on to the original tweet author even though you might not have cared about the topic a minute ago.

Twitter ALREADY has a way to signal that you want all your followers to see a tweet you saw: retweet. And even showing your followers things you favorited at least means they'll see things you probably like. But it seems there's at least a reasonable argument that showing your replies to your followers is setting up a situation where pile-ons to the original tweet are likely.


I guess the point is that Twitter could easily tone down pile-ons by noticing that a tweet is generating many more replies than likes, and then reduce display of that tweet instead of boosting it to non-followers.

Perhaps not for blue checkmarks (they've declared themselves central to the public debate), but for average users Twitter should try to calm down pile-ons.
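
Back-of-the-envelope, the damper could be as simple as this (a hypothetical sketch; the threshold and multiplier are invented, and this is obviously not Twitter's actual ranking code):

    def boost_score(likes: int, replies: int, base_score: float) -> float:
        # Many more replies than likes suggests a pile-on ("getting ratioed"),
        # so damp the tweet instead of boosting it to non-followers.
        ratio = replies / max(likes, 1)
        if ratio > 3.0:
            return base_score * 0.1
        return base_score

    # A tweet with 5 likes and 40 replies gets damped; the reverse doesn't.
    assert boost_score(5, 40, 1.0) < boost_score(40, 5, 1.0)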


Most of those problems would go away if they a) eliminated the gamification (displaying counts of replies, retweets, and likes) and b) required textual comments of a certain minimum length.

But then so would the engagement and ad revenue.


That doesn't sound like a solid indicator of an issue. Two friends could be having a back and forth discussion with no harassment or conflict. You'd end up with 25+ replies and 1 like.


What's the point of locating in Silicon Valley and hiring the smartest programmers in the world if you can't figure out an algorithm to make hateful posts not show up as often in someone's feed?

I doubt it's because they can't. The more likely answer is they don't want to.


It's actually a hard problem, similar to porn detection without using humans (see: https://en.wikipedia.org/wiki/I_know_it_when_I_see_it). Blocking purely based on keywords or Bayesian filtering usually paints with too broad a brush and ends up limiting well-intended free speech (I once had a comment blocked for arguing AGAINST racism!). It's similar to the "blocking all mention of sex also blocks sex education" problem. It seems to take a fully fleshed-out intelligence to grasp the true meaning behind even something as innocuous-looking as a written sentence.

Your assumption that people more intelligent than you "should have figured this out by now" belies the very problem: no one has yet come up with a good automated solution for this. If YOU do, you'll be a millionaire.
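
To make the over-blocking failure concrete, here's the naive keyword approach in miniature (a toy sketch, nothing like a production classifier):

    # A toy keyword filter: it blocks anti-racist speech just as readily
    # as racist speech, because it only sees the word, not the meaning.
    BLOCKLIST = {"racism", "racist"}

    def is_blocked(text: str) -> bool:
        words = {w.strip(".,!?").lower() for w in text.split()}
        return not BLOCKLIST.isdisjoint(words)

    print(is_blocked("Racism is fine"))             # True
    print(is_blocked("We must all fight racism!"))  # True (false positive)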


Again, I disagree. Twitter came up with a way to make some posts more widely shown, and you're trying to tell me they don't have a way to make some posts less widely shown? As someone else said, if there are a lot of comments and few likes, don't put it in the trending feed. That's one solution for free, and I don't even work for Twitter. If it's two people having a conversation back and forth, the broader Twitter audience doesn't need to see it. It's not censored, it's not hidden, it's just not broadcast either.

People have become millionaires, billionaires even, for the exact opposite of what you say. You become rich by making sure controversial content is spread as far and wide as possible, because hatred and fear sell as entertainment. People get addicted to it. You don't become rich by filtering out hateful content, you become rich by enabling it and spreading it because that's what people want (as long as they're not the target).


If you limit yourself merely to detecting abusive tweets, perhaps it is hard. But there are plenty of ways to adjust how the social dynamics work that would decrease this kind of behavior; the catch, I believe the argument goes, is that most of those would also decrease _engagement_.

The real problem is the incentives, both for Twitter and for people interacting on Twitter. The solution is probably _social_ rather than technical, but as long as Twitter wants to keep your eyeballs on their site for as long as possible (so they can sell ads or whatever to advertisers) a whole host of solutions are going to be verboten.

By way of example, Hacker News literally has a feature to just lock you out of the site if you are using it more than you want to. That is great for us, the users. But Twitter would never do such a thing.


I would imagine the issue is certainly because they can't. What is hateful to you is charming and encouraging to someone else. Social norms and cultural differences are gigantic. Look at the recent controversy with the conservative guy on YouTube who referred to a reporter from Vox as their 'queer Latino reporter', which was seen as hate speech... despite the Vox reporter openly and frequently labelling themselves as Vox's queer Latino reporter. How is a computer supposed to interpret that? How is it supposed to know that when person A says something and person B says the exact same words, referring to the exact same subject, the greater context of the speaker's background, political affiliations, and audience actually determines the 'meaning' behind the statement, not the statement itself?

This is not an easy problem, and it does no one any good to pretend that it is. Tackling the issue also requires those considering it to consider other social situations. Is someone supporting equal treatment of women in Saudi Arabia practicing hate speech against the conservative ruling party? If we'd had systems that let us actively regulate speech in the way we can now, would it have been appropriate to block Martin Luther King Jr. because his message was spreading civil disobedience and causing families to bicker over race politics? Why are we so damn certain that any argument today will necessarily be decided by a regression rather than a wider acceptance of more progress? Change in human societies is always ugly, always comes at the cost of pain and strife, and on balance has usually moved us in a forward direction. I can't say the same for censorship. Censorship makes impossible any forward movement, and only serves to leave regressive mindsets to fester and make-believe that they have more support than they actually do.


We're not talking about banning these posts, or hiding them, or censoring them. Just not showing them as widely as they do other posts. It doesn't even need to go as deep as "this is hateful", but rather "this has the potential to be hateful" or giving the author the ability to control how widely the message is being shared.

I see these people here trying to debate solutions like good engineers, but unless they work at Twitter, it's a waste. We can guess all day and come up with a million solutions but when it comes down to it, Twitter absolutely has the ability to control posts that spiral out of control. What they don't have is the desire to do so.


What's the line between censoring and "not showing them as widely as other posts"?


It can be smoothly weighted by the probability of a post being undesirable. So if the algorithm thinks it's 50% undesirable, simply count it as "half a weight." Or tune this function to be whatever you want. Twitter et al. already make arbitrary choices about what gets shown.
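
In code, something like this (a toy sketch; the probability would come from whatever classifier they already run):

    def effective_engagement(raw_engagement: float, p_undesirable: float) -> float:
        # A post the model scores as 50% undesirable counts at half weight.
        return raw_engagement * (1.0 - p_undesirable)

    print(effective_engagement(100.0, 0.5))  # 50.0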


Not every post gets shown as widely as some do. What's that line? That's where I'd start.


For every mean-spirited hate post that gets promoted, another tweet about knitting is not promoted. Why is censorship only bad if the content is hateful?


"How is it supposed to know that when person A says something and when person B says the exact same words....."

I was about to argue against this but then realised it's worse than you suggest.

If I as a white person used the N word to describe a black person I would be labelled a racist, whereas a black person can say it all day long. But even if I black up and say it, it's even worse. But then with gender the rules are almost reversed: I can declare myself a woman and expect that to be somewhat respected.

And on the internet no one knows you're a dog, or a transvestite in black face.


We're at a stage in "AI" where we can fool image detection by modifying a single pixel, where Google AI mislabels black teenagers as gorillas and Bing overlooks child porn, and where self-driving cars still drive into things.

All while "learn to code" is used to harass in some contexts...

But we expect Twitter folks to just figure out an algorithm to filter out "hateful" posts, when there isn't even an accepted definition of hateful? The first replies it would filter would be all the people telling Trump how bad and evil he and his policies are, while the people who actually try to harass others would find quick and easy ways to game the system, as they always have; that's my prediction of a 'best' case outcome.


That would only be two people. You could factor in # of users.


Additionally, there's no real need for technically public discussions to be promoted or made more public, so it's not really a failure state if the algorithm doesn't promote a high-reply-rate exchange between two users exclusively.


You could simply add the number of distinct users replying. Seems like a pretty simple fix.
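
e.g. (a hypothetical sketch):

    # Counting distinct repliers: a 24-reply conversation between two
    # friends shouldn't look like 24 strangers piling on.
    def reply_signal(reply_author_ids):
        return len(set(reply_author_ids))

    print(reply_signal(["alice", "bob"] * 12))            # 2  (a conversation)
    print(reply_signal([f"user{i}" for i in range(24)]))  # 24 (a pile-on)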


I'm still not very convinced that replying without liking is an indicator of negativity. Maybe in most cases, but definitely not all cases.

I don't use the like feature on the website at all and often comment on artwork saying how nice it is or whatever.


They have all the data to be able to make a relatively simple change like this. They don't want to, likely because it "drives engagement".


In general, whenever people say something is relatively simple, and yet this thing has not happened, it can often be a sign that we are missing some hidden complexities.

Not always, but often.


I almost exclusively use the like feature.


The alternative headlines would make sense if that were the consequence of the algorithm, but instead it seems to predominantly result in folks who have runaway tweets getting harassed by folks they don't know. Why would you sugar-coat that with some noise about driving awareness?


I can see the poster's point of how it could lead to negativity in some cases, but like you I don't understand what the big revelation is here. Social networks thrive off more people interacting with more posts, so they show posts that have been interacted with a little bit to lots of people hoping they continue to get interacted with. That doesn't really surprise me at all.


The idea of twitter just randomly deciding to boost a low-like-count tweet because it got replies is EXTREMELY WEIRD. Nobody knew the service worked this way. Showing friends' likes in your timeline is not a new deal but in this case there weren't likes, it was just "high engagement". High engagement tweets are often controversial posts from women or conservatives or leftists, and all of those groups are likely to get inflammatory replies that the original poster may not have wanted - you don't have any control over whether your tweet goes viral or gets ratioed.


And if they weren't opaque, gaming them would be even easier.


Security by obscurity ain't security, though. If they weren't opaque, at least we as a society would be better equipped to keep up with the algorithmic exploitation arms race.


I think, unfortunately, that is naive. While security through obscurity ain't security, there is something to be said for obfuscation of a system that people are trying to game.


That's the point, though: if a system is so fragile that anyone with knowledge of its inner workings can game it or otherwise exploit it, then it is not and was never secure (nor can it ever be while it continues to be opaque).

I say this enough to be a broken record, but transparency is a dependency of trust.


This isn't a trusted system though. The data it consumes and indexes, the people who use it, etc. are not trusted.

You are effectively saying poker would be better if everyone's cards were face up.


> This isn't a trusted system though.

Well yeah, obviously, given that it ain't transparent.

> You are effectively saying poker would be better if everyone's cards were face up.

You are effectively saying that an ideal system is one that we'd have to treat like a poker game.

Even assuming the premise here holds true (that a transparent system will be more easily gamed by more people), that'd ultimately be better than the opaque case. The more people who are able to game a system, the less one individual can effectively game it for one's own individual benefit at the expense of everyone else in that system.


I thought of a better way to express this that might make sense to you.

In security, total transparency isn't effective. You want as much transparency as possible, but you need secrets for the system to work (usually passwords/certs/passphrases).

Now, there isn't a password/certs/passphrase in this context, so the secrecy is instead in the model.


Yes, it turns out that transparency isn't always the best thing, even if it is always the best thing for security systems.

