Rockslide's comments

> What's next We're improving the developer experience in three distinct phases: Framework -> Design -> Cloud

So the "cloud" part is where the enshittification will begin. Been there, done that, switched away from next.js :|


You have that mixed up: storage and ingestion are cheap in BigQuery, but processing is exactly where they grab $$$.


But you don't pay per compute second, you pay per byte scanned. That's why TFA paid for “terabytes” in their query.


I don't have a lot of sympathy for people using their tools wrong. Using partitioning surely would have prevented this.


I have some sympathy for people who are blindsided by a surprising difference between a new tool and their old one.

This post is not eliciting sympathy. They're data consultants, who don't understand a very basic and fundamental aspect of the tool that they're using and recommending. If you're a consultant you have a responsibility to RTFM, and the docs are clear that LIMIT doesn't prune queries in BigQuery. And, also, the interface tells you explicitly how much data you're gonna query before you run it.

This post is also blaming Google rather than accepting their own part in this folly, and even admits the docs are clear on this matter. Cost-control in BigQuery is not difficult. One of the tradeoffs of BQ's design is that it must be configured explicitly; there's no "smart" or "automatic" query pruning, but that also makes it easier to guarantee and predict cost over time as new data arrives.
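For context, "configured explicitly" can be as simple as a dry run plus a byte cap. A minimal sketch with the Python client (project/table names made up):

    from google.cloud import bigquery

    client = bigquery.Client()
    sql = "SELECT user_id FROM `myproject.mydataset.events` WHERE event_date = '2020-07-01'"

    # Dry run: estimate the scan size without running (or paying for) the query.
    dry = client.query(sql, job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False))
    print(f"would scan {dry.total_bytes_processed} bytes")

    # Hard cap: the job errors out instead of billing more than ~1 GiB.
    capped = bigquery.QueryJobConfig(maximum_bytes_billed=2**30)
    rows = list(client.query(sql, job_config=capped).result())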


Yes the whole consultancy situation really is the icing on the cake - as the customer you pay for (alleged) experts in the field and get this as the result...


Wouldn't people who knew their tools perfectly well not even use a cloud service like BigQuery? At the level you expect them to use the tool, they could have created a big query engine themselves. Isn't the whole point of these tools to make things easier?


Sorry but that's nonsense. Partitioning is THE central cost controlling mechanism in BigQuery and the docs clearly state this. And it's an easy to use feature, so I'm not sure what makes you think using that would be as challenging as building your own query engine.


Querying a partitioned table only saves money if you also filter on the partition key, so you can still footgun with a LIMIT if you're not aware of it.
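To make that concrete, a sketch with a hypothetical table: with require_partition_filter set in the DDL, BigQuery rejects queries that don't prune, LIMIT or not.

    # Hypothetical table, partitioned by day, with pruning enforced at the schema level:
    ddl = """
    CREATE TABLE `myproject.mydataset.events` (ts TIMESTAMP, payload STRING)
    PARTITION BY DATE(ts)
    OPTIONS (require_partition_filter = TRUE)
    """

    # Cheap: prunes to a single day's partition before scanning.
    good = "SELECT payload FROM `myproject.mydataset.events` WHERE DATE(ts) = '2020-07-01' LIMIT 10"

    # Rejected under require_partition_filter. Without that option it would run,
    # scan the entire table, and bill for all of it, despite the LIMIT.
    bad = "SELECT payload FROM `myproject.mydataset.events` LIMIT 10"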


The replies on LinkedIn are quick to point out everything they did wrong.


Adding a contact form to a statically generated website - that's what I use it for (all email goes to a single predefined account)


Why don’t you use a mailto: URL instead?


it literally says "same project" right there


Thanks. Sorry, just wanted to be completely sure.


> If that is all you're going to put there, then just leave it blank.

Well, I would love to. Unfortunately neither Play Store nor App Store allow you to do that... so "bug fixes and performance improvements" it is, 99% of the time.


Trading resumed hours ago, probably before this article was even published.

In a first statement, they blamed algo-traders who created (and within milliseconds deleted) a ton of pointless orders, like buying Wirecard for 0.05 EUR (which is ~1/100th of the current price). Yes, they explicitly called out orders on Wirecard. So basically... Denial of Service by bot-traders.


the order entry gateways normally have rate limits

and if a member does bad things the exchange can just cut them off

several good incentives for members to exercise control over their client's order flows


Have a source for that statement? All I found were speculations by traders.


It was on German n-tv. Although they now state the opposite here: https://www.n-tv.de/wirtschaft/der_boersen_tag/Arger-ueber-F...


Is it possibly a bug where they meant to buy 1/100 the number of orders at 100x the price?


This is such an easy problem to solve. All orders must stand for at least 1 minute before they can be cancelled. It stops all this crazy nonsense to try to move the price and arbitrage the pennies.


Even without the obvious impacts to market makers and the subsequent price increases to investors, this doesn’t help.

Most things trade with price/time priority. This means that first tie breaker is price but second tie breaker on what fills is who had their order into the market first. This means that market makers layer orders in as early as possible, some strategies I’ve seen layering them weeks or months in advance.

So this proposal would add cost to the market without changing anything about the cancel race.

The real solution to this sort of problem (which I don’t believe actually caused this issue) is the same mechanism you put in any request/reply system, rate limits. Most exchanges have them.
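Conceptually, those gateway limits are just a token bucket per session. A toy sketch (all numbers invented):

    import time

    class TokenBucket:
        """Toy per-session limiter: allows short bursts, enforces a sustained rate."""

        def __init__(self, rate_per_sec: float, burst: int):
            self.rate = rate_per_sec
            self.capacity = float(burst)
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at the burst size.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    gateway = TokenBucket(rate_per_sec=1_000, burst=5_000)  # invented numbers
    if not gateway.allow():
        ...  # reject the order/cancel message; repeat offenders get disconnected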


Rate limits, or discrete (in time) instead of continuous trading. An auction every minute or so, for three hours a day. That would be much better.

(When I started in equities in Hong Kong in 2007, the stock market was open for 4 hours a day... 10:00 to 12:30, a two-hour lunch break to drink and socialise^W^W^W have important business discussions with clients, then 14:30 to 16:00. Today it's open 5.5 hours a day. EDIT to add: "Reactions from both brokers and the restaurant industry were mixed." (1)

XETRA is open 8.5 hours a day without lunch break, while Frankfurt (FSX) is open 12 hours a day, from 8 to 8. WHY???)

(1) https://en.wikipedia.org/wiki/Hong_Kong_Stock_Exchange#Tradi...


We have these in Europe & the UK now with MiFID II, on several venues

They are called Periodic, or Frequent Batch, auctions.

This paper & earlier works it's based on were very influential in the design and regulatory framework for these: https://academic.oup.com/qje/article/130/4/1547/1916146

Here's a good intro to them: https://www.fca.org.uk/publications/research/periodic-auctio...

Volumes are small relative to the market as a whole, but execution quality is very good.


> WHY???

Shouldn't the question be "why not"?

More APIs than not on this planet are considered business critical and don't just close. Stock markets are strange in that they must be highly available for a few hours and can then be shut down for the day. Imagine if AWS closed every night.


A lot of Amazon retail does close every night. You can put in a request but it won't be fulfilled and shipped until morning. Getting rid of that is a nice idea, but also "instant fulfillment" is not great either without "instant returns." The idea of "instant trading" seems like it can instantly put the system into a bad state with no recourse to fix it.


AWS doesn't close, which was the example.

The real reason stock markets close is because financial institutions are very interested in keeping it that way. It is lucrative to play pretend stock markets after hours.

(This is a very real question all over Europe right now, and it is interesting to see which institutions talk the loudest about how beneficial a shortening of opening hours would be to just about everyone.)


>WHY???

The exchange wants to make more money, and people want to be able to trade at whatever time they feel like.


It turns out they don't; they really want to meet their benchmarks, and trade where the liquidity is.

One data point - 40% of the French stock exchange's turnover is now in the closing auction. Other markets have a similar trend. Longer trading hours mean thinner liquidity throughout the day, and the direction of travel in Europe is strongly towards shorter trading hours.


The answer is also “foreigners”. Regular working hours for a German exchange can produce some pretty brutal trading hours for say, London. When I worked in finance they started serving breakfast at IIRC 5am, since some trading desks started their day then and ended around noon. Expanding those hours can entice foreigners to trade during non-insane local hours.


> Regular working hours for a German exchange can produce some pretty brutal trading hours for say, London.

Berlin and London time are only off by one hour.

You probably meant the US instead of Berlin/London or there is something going on I'm not aware of?


Yes; I picked the time zones out of thin air, and I chose poorly.


Tradegate is even open 08:00 to 22:00...


That's not a solution: it would mean most market makers instantly leaving your exchange, or quoting really wide (providing a worse spread, so people wishing to trade have to pay a worse price), because holding your orders in the market for a minute is incredibly dangerous given how much the value of the underlying product can move at any time, especially when it's correlated with something traded on another exchange.


Usually what works for such abuses is either to charge extra (tiers depending on volume) to discourage abuse or at least to cover losses, or to throttle the biggest offenders when getting unreasonable peaks, in order to keep the system always running.

Both are used in many other sectors. If you have "industrial level" needs you usually have to pay extra, either for the service or for the additional infrastructure needed to provide it, or you simply get everything throttled to a sustainable level (think water, electricity, etc). Most traders would not see any difference, and the ones that would will have to think twice before DoSing.
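For illustration, such a tier/ceiling schedule might look like this (all numbers invented):

    def message_bill(n_messages: int):
        """Marginal pricing per tier, with a hard ceiling: traffic above the
        ceiling isn't sold at any price, it gets throttled instead."""
        tiers = [(1_000_000, 0.0010), (10_000_000, 0.0005), (50_000_000, 0.0002)]
        ceiling = 50_000_000
        billable, fee, prev_upper = min(n_messages, ceiling), 0.0, 0
        for upper, price in tiers:
            fee += max(0, min(billable, upper) - prev_upper) * price
            prev_upper = upper
        return fee, max(0, n_messages - ceiling)  # (fee, throttled messages)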


>They could put tiered charging and above a certain volume of transactions apply an additional fee (one time or per transaction). This is a model used in many other sectors.

It's actually quite common for exchanges to offer discounts/rebates to firms that trade more, because these firms are essentially providing a service to the exchange (market making, and generating turnover). Much like how other industries offer discounts for buying in bulk.

>The other option is to rate limit the biggest offenders when the system's performance limit is reached.

I don't know why people always suggest solutions like this when it comes to exchanges. If it was an e-commerce service and somebody suggested "let's rate limit customers to reduce load", it'd be shot down as a lazy solution (and a great way to lose customers). If a platform isn't good enough to support the load its customers place on it, then it needs to be improved. If AWS goes down during the world's biggest shopping day, we don't blame the customers, we blame Amazon.


> It's actually quite common for exchanges to offer discounts/rebates to firms that trade more

I reworded a bit to give a better idea of what I meant. In general there are bulk discounts but there's always some form of "QoS". Either you get throttled above a ceiling, or you pay through the nose to have that ceiling higher for yourself and have everyone else throttled.

All tiered systems work like this. The fixed price goes up and gives you access to better service, higher limits, lower price per usage, etc. As long as you don't have virtually unlimited capacity you can't treat your system as if you do.

Mobile operators have to do the same. They charge a base contract price that sets the tier, then charge per usage within that tier, with different ceilings, QoS priorities, and throttling rates. And no matter how big you are as a customer, at some point you either pay a surcharge or get throttled so the system stays up and other customers are also served. (source: I worked both for a large telco and for a customer that paid ~3 million EUR per month to get the consumption costs per min/MB down to almost 0)

> I don't know why people always suggest solutions like this when it comes to exchanges.

By any chance do the suggestions always come after the exchange was down due to high transaction volumes? Sure, they could implement unlimited capacity. But since this is a tad unrealistic and actually crashing the system is worse than limiting to keep it just below crashing, perhaps the suggestion makes sense.

There aren't many sectors that I can think of which give you "unlimited usage no matter what". There's "rate limiting" even in hospitals, where lives are at stake.


>By any chance does the suggestion come after the exchange was down due to the high volume? Sure, they could have unlimited capacity. But since this is a tad unrealistic and actually crashing the system is worse than limiting to keep it just below crashing, perhaps the suggestion makes sense.

I don't see a problem with rate limiting in general, if it's applied fairly. Good design should already entail that (it shouldn't be possible for customers to "crash" an exchange any more than it's possible for them to crash google.com). And many exchanges in fact have something like you described (customers can pay to buy more transactions-per-second capacity). But I don't think referring to customers as abusers or applying punitive fines is the right approach; if the exchange API let the customers crash it, that's the exchange's problem, and the customers shouldn't be blamed for taking advantage of whatever rate limit the exchange gave them (even if it gave them too much for its system to handle).


FWIW, eurex and xetra do have rate limits and will throttle and disconnect abusers.

There are also limits on average order to trade ratios (this is required by MIFID2 IIRC).


You’re right - the requirement on order/trade ratios is more a policy requirement, but venues are required to have limits to control excessive message rates.


Lots of exchanges have rate limits and messaging fees. And even if they don't, if you cause problems they'll shut you off.


To be fair, executions ("customer purchases") are relatively rare events. If we want to keep with the e-commerce analogy, you can think of orders/cancels as merchants repricing their wares and changing their on-offer quotas hundreds of times per second. Each. As a marketplace owner you probably would want to at least rate-limit that.

In a physical store you wouldn't get to the shelves from the throng of clerks running around with sticker guns.

> If AWS goes down during the world's biggest shopping day, we don't blame the customers, we blame Amazon.

Yes. We do. But the customers blame us. It may not be our fault, but it's still our problem.


Do you know how Amazon stays up during Black Friday? Rate limiting.


It could be a solution if markets agreed to the same rules. This form of trading has questionable economic effects, in my opinion.


>This form of trading has questionable economic effects, in my opinion.

Have you ever actually researched the economic effects, or is that just a gut feeling?

If you look at it historically, as machines have come to dominate market making, spreads (the difference between buy and sell price) have continually fallen, meaning institutional and retail investors can buy what they want at a better price. Machines are able to offer these better deals because they know they can get out of their position faster if the market suddenly moves; removing their ability to react faster would remove their ability to offer better prices.


The part about spreads being smaller because machines are making markets is true, but there are important caveats. The one that comes to mind first is that the tight market is only there for small quantities. If you want to trade in size, then you are out of luck.

Machine based market making tends to work well when markets are operating "normally". When some regime-changing news comes out, it's not uncommon for the over-fit algorithms to perform badly so the managers just turn them off. I.e. liquidity disappears just when it's needed most.

https://www.wsj.com/articles/thinning-liquidity-in-key-futur...


Well, yes, if you have a market-moving trade, you'll have to pay a premium for it to be executed. MMs are not there to give money away.

Similarly, if your house is on fire, it is hard to complain that buyers go away until they can evaluate how much the ashes are worth.

Market makers mostly provide a service for retail investors.


To me that sounds like customers are getting a slight discount on everyday transactions, but getting creamed when the market suddenly moves.


I actually did read about it. Wikipedia alone can enlighten you about the respective controversies. I think the critical voices sound more reasonable, but it is an open question at least. It is still my opinion, as stated.

Show me the product (some say it is liquidity) and I withdraw my criticism.


I don't see any controversies mentioned on https://en.wikipedia.org/wiki/Market_maker. https://en.wikipedia.org/wiki/High-frequency_trading mentions some controversies about high-frequency trading in general, but none specifically about market making.


That is why I call that gambling and not investing.


Why do you call it gambling? The market maker doesn't want to take a position on the stock, that's why he tries to get out quickly if it starts moving. He just wants to provide liquidity (be willing to buy and sell to anyone at any time, so that buyers don't need to wait for a matching seller to come along), and collect a small fee for that in the form of a spread.


But who are the customers?

Why do investors need orders filled instantly?

It seems the only customers of this service are day-trading troublemakers.


A minute? Have you ever traded before? Prices move so fast that this would destroy market makers and so on.

Sometimes I cancel 4-5 orders before getting a fill on an option trade, manually.


Is it even a legitimate problem? Exchanges are well-equipped to deal with this kind of thing and direct access to exchanges is moderated by broker-dealers. As a normal user (algo trader or otherwise), you cannot directly plug into the exchange and spam orders. And anyways, exchanges (even IEX) _want_ the arbitrage and liquidity that market makers offer.


Or make taxes on gains start at 100% and exponentially decay as you hold a certain instrument longer. If the final tax is 30%, that could be reached e.g. after a week.
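As a sketch of what such a schedule could look like (the one-day half-life is my own assumption; only the ~30% after a week is from the proposal):

    def gains_tax_rate(days_held: float, floor: float = 0.30, half_life_days: float = 1.0) -> float:
        """Starts at 100% and decays exponentially toward the long-term floor."""
        return floor + (1.00 - floor) * 0.5 ** (days_held / half_life_days)

    # 100% at 0 days, 65% after one day, ~30.5% after a week:
    assert abs(gains_tax_rate(7) - 0.30) < 0.01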


Should that rule also apply to retailers of other products, so they pay 100% tax if they turn over wholesale products too quickly? Why or why not?


I fully agree with you. I have no idea why you are down-voted so heavily.


> I fully agree with you. I have no idea why you are down-voted so heavily.

Maybe because they claimed it's easy to solve when the solution they proposed in fact has a whole lot of undesirable consequences that are obvious to anyone more familiar with how markets work.


Sending out orders and withdrawing them immediately after is market abuse; there is a regulation in Europe against it (MiFID II). The real question about the "whole lot of undesirable consequences" is: to whom? And who would benefit instead?

The truth is that the real effects of algo trading are still poorly understood as a system, and they have been able to crash markets a couple of times in the past, for no reason. Do we really need this form of speculation "because liquidity"?


>sending out orders and withdrawing them immediately after is market abuse

It's abuse if it's done for the purpose of manipulating the market (spoofing) or deliberately to slow it (quote stuffing), but not if it just happens because somebody put in a bunch of orders and then realised their mistake.

>"a whole lot of undesirable consequences" is, to whom

To all the institutional and retail traders on the exchange, who have to pay worse prices for their trades because the market makers can't quote so tightly.


> but not if it just happens because somebody put in a bunch of orders and then realised their mistake.

Surely that would be an occasional thing though, not several times a second for a prolonged time?

I can see how throttling of some kind could work, as long as it's not too aggressive.


Market makers routinely have to adjust their quotes as the price moves. If they set a price and then the market moves significantly, the price will soon be incorrect. If they leave the order on the book then they might get undesirable fills (losing money) or it might just sit there and never get executed because their price is worse than what the market is offering. In either case, it's useful for them to be able to adjust their quotes by cancelling previous quotes that are not up to date with the latest information.

Imagine if every time you go to the gas station and see the big price for a gallon you write it down. Should the gas station still have to offer you the lowest price over the past {day,week,month} if the price has gone up? "Cancelling" orders is this same idea of changing the offer, only at a much smaller time scale because markets trade extremely fast.


It’s not abuse if you’re trying to get your order filled. I make and immediately cancel orders frequently, simply to get a fill. Derivative prices move fast


GB never joined the Eurozone. So what does Target2 have to do with any of that mentioned in the article?


Yes it's expensive in some ways and in others it isn't. Can you get a better synth for the money? Sure. Can you get a better sampler for the money? Sure. Can you get a better sequencer for the money? Sure. Can you get a decent synth, sequencer, and sampler for the money? Well, not necessarily. I'm not saying the price is entirely justified. You certainly pay a lot for the "brand". But on the other hand it might not be as overpriced as one might think by intuition.

Plus, as an owner, I hope the price only goes further up, or at least stays at that level :)


I own an OP-1 and a mess of other synths. The Deluge comes close. I love my OP-1 for its size and portability. Being able to hack up a song on an airplane exclusively on one device rules.


The Deluge is excellent and I like that they really try to prevent people from having to menu dive. But sometimes when I can't remember what the 3-key combo is, I really wish it had a usable screen so I can look at a menu.


> Plus, as an owner, I hope the price only goes further up, or at least stays at that level :)

How does a high price benefit you when you already own it? Seems a bit egoistic. Maybe you're planning to sell it? Then I would understand your sentiment, but otherwise it seems a bit strange.


It's nice to buy an instrument that ages well. A lot of synths do not hold their value once they leave the store shelf. The OP-1 managed to become something of a classic in a short period of time. If you're like me, you want to buy things that hold their value, so you can trade out gear without leaving a bunch of money on the table.

The price hike on TE's side doesn't really have anything to do with this, though. It's likely due to having to source old parts or finding new parts that are more expensive, and adjusting the design to accommodate.


I think when it became part of MoMA's permanent collection it really solidified a special place for itself.


Exactly! You worded it better than I did :)


>How does a high price benefit you

Reassurance that the company is actually trying to be sustainable.

All too often, niche physical products get it into their heads that they can break into a mass market and hire salespeople to get them into retail, which starts the cycle of lowering the price and compromising the vision. Then they wake up a year later: their product completely devalued, gathering dust on store shelves at best or being sold off at a fraction of the price at worst, and their customer base hasn't grown.

Happened to Roli in the same space.


Yeah, that makes sense. But then Rockslide should say "I hope they keep and don't compromise on their vision" rather than "I hope the price stays the same or gets higher".


Spending money doesn't "hurt" as much when the goods you spent it on don't lose value, that's all ¯\_(ツ)_/¯ I might regret having bought it if you can get it for 200 bucks in 5 years. I will not regret it if you can sell it used for the same price as today.


If everyone uses it, it will sound too generic to listeners. That's why mainstream artists hire sound designers to get a never heard before sound for their synth.


You're overestimating the power of the hardware and underestimating the value of creative patching when it comes to synths. As long as you get control over the patching, you'll be able to create "unique" sounds as much as you want.


Isn't this how collecting cars or investing works? I mean it is YCombinator here lol.


> Can you get a decent synth, sequencer, and sampler for the money? Well, not necessarily.

Not even close to true. Deluge, MPC Live, AKAI Force are all waaay more capable and cost about the same new as a used OP-1.


Is that true though?

The Deluge currently appears not to be buyable anywhere here, and if I bought it directly from them in NZ and added import tax, it probably wouldn't be that much cheaper than an OP-1, if at all. And then the Deluge is at its core a step sequencer, so you can't even record unquantized. Might not be important to you, might be the killer missing feature for someone else. But never having owned a Deluge I can't say much more about it.

The AKAI Force is not that much cheaper than an OP-1 here. Is it "way more capable"? I don't know. But it doesn't even have a piano-like keyboard input, so again, at least one very distinctive feature it doesn't have compared to the OP-1. And then look at it. Would you want to take your AKAI Force on your commute to jam? I wouldn't.

The only one that really is significantly cheaper is the MPC Live, which I don't know, so can't really judge. But this seems to be a sampler at its core, so not sure about its synth capabilities? And it also doesn't have a keyboard...

Edit: well, I stand corrected - the Akai MPC Live II costs the same as an OP-1.


Ah, but can you power all of those from a battery pack and keep them on your lap on the train after taking them out of your small backpack? And all at the same time!


With the Live, you can. It's both much more capable and costs half as much.

However, the OP-1 is probably easier to handle because it's smaller.


MPC Live has an internal battery


> Can you get a decent synth, sequencer, and sampler for the money? Well, not necessarily.

If you are ok with transferring samples to the device via a web interface, the Novation Circuit is an absolute killer answer to this question.

Sure, the OP-1 is a gem, no doubt about that, and I would love to have one. But for the price, the Circuit is a really really nice, usable and accessible device.


+1 for the Novation Circuit, super portable and really powerful for the price. Compared to the OP-1, I'm sure it's nowhere near as powerful, but it's a really nice little production tool you can bring with you everywhere, just like the OP-1. I'd say the sequencing is a bit easier on the Circuit as well, but the synths and sound in general are better on the OP-1.


Reading that stuff always makes me feel incredibly stupid (or rather clueless). I wouldn't be able to answer most of those questions off the top of my head. Including this weird clock hand question (which is probably some trivial math that might or might not have been taught to me more than a decade ago and which I never ever actually applied since).

Then I probably have to remind myself that I'm (halfway successfully) running a SaaS platform with tens of thousands of users, so maybe I'm not that clueless after all.

Maybe Google scale isn't for me.


Using google products has made me certain that being able to solve math/algorithmic problems and being able to make good software really aren't connected in most cases.


It's funny, because I seem to remember the original Google homepage, Google Maps, and Gmail all being pretty good.

Then, somehow, little annoyances crept in and accumulated over time.

And now everything they make is like treacle.


Yep! To be fair, Google software is usually technically very good (e.g. their security is second to none), but they have no product vision, and often very bad UX/DX.


Google products and Google libraries. Tensorflow, Angular, GWT, the Android development experience....


The clock hand question is painfully obvious once you look at an actual analog clock face. All of the questions were easy bordering on trivial, as long as your background lines up with the target position. The problem is not the questions, but the culture of gotcha interviewing. Haha, we got you doing an off-by-one by offering the C option on a whiteboard; haha, you didn't initialize one variable on a whiteboard; haha, I'm in a bad mood, therefore you fail; etc.

>I recently came across your name as a _possible world class Engineer_

That right here is the cancer.


> The clock hand question is painfully obvious once you look at an actual analog clock face.

You see, intuitively I thought this would involve some kind of radian maths (just because it's about angles and a circle) and I don't remember any of that - never had any use for it.

But yes, it is a lot easier than that. Not sure I would have discovered the trivial approach in an interview situation when it didn't even occur to me at home on the couch.


It's established that the Google-style interview questions bear no semblance to real world performance or software engineering skills. All they test is the ability to memorize leetcode style questions and regurgitate them during interviews.


Is this established? What I've heard is that while it bears no relation to the actual job, it does correlate with real world performance at that job.

I've only heard this anecdotally, but it is their best hiring indicator which is why they persist with it, despite it being very expensive.


I think you would find a better correlation of interview-job performance if you looked at the interviewers involved in the hiring process


These interviews are notoriously hard to pass. Less than 5% get in. You should not feel bad about it. Being an SRE at Amazon or Google scale requires that you know what is going on in every layer of the infrastructure, including data structures in code, so that you can pinpoint the exact problem in the stack. I think some of these questions are overkill and have nothing to do with how good you are going to be at the job, though. Writing a binary tree is certainly one of those. You are not going to write one, because there are tons of libraries in every language that are in production. Knowing that DNS can use TCP is much more valuable. That falls into the category of things you are likely to run into when debugging an issue.

If you want to get a job at Google you have to be willing to put up with these questions (both the ones that make sense and the ones that don't) just to show your dedication. I usually prep for such interviews for roughly 3 months, and I have worked as an SRE for a long time. Do not assume that we know all of these things off the top of our heads, and do not feel stupid.


The purpose of these interviews is to:

1. Find people who fit the current culture

2. For the interviewer to feel better about already having a job at Google. I remember one guy bragging about how much he made.

It's really not to find out if you're good at something. I have been offered jobs at some of the FAANMG companies so I'm not talking out of jealousy or whatever.


One thing that struck me is that the interviewers are asking the same questions to many different candidates, so they are going to know all the slip-ups that people make.


Many people who work at Google feel the same. Impostor syndrome is a recurring motif in the internal communications.


Well, when you work at Google, at least you must have passed that kind of an interview at one point - which is something I'm fairly certain I wouldn't be able to do.


One of Google's HR groups did a test where they gave the hiring review team their own interview packets (anonymized, from when each team member originally interviewed). None of them were hired.


I can say with 100% certainty that my coworkers ask questions in interviews that they wouldn't be able to answer themselves. It's kind of ridiculous IMO and I try to call it out when I see it (in the nicest possible way of course)


The issue with impostor syndrome is that your brain is good at finding excuses such as "I was lucky", "They did not catch me", and so on.

Personally, I had a lot of fun doing my interviews and learnt a lot while preparing for them. I'd recommend trying them; it's gonna be interesting, even if you don't get the job.


I have done interview training twice in my 8.5 years at Google, and both times it only made my impostor syndrome far far worse. I simply can't do interviews here, I wouldn't pass the ones I had to give.

It doesn't help that I came in as an acqui-hire.


Google can afford a lot of false negatives because many people want to work there (well, here, as I work there).

What makes no sense is small companies copying this process and then griping about finding talent.


The clock should be simple -- the hour hand is a quarter of its way to 4 o'clock, so: 360*((1/12)/4) = 7.5 degrees.
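For anyone who wants the general rule rather than the one-off reasoning, a tiny sketch:

    def clock_angle(hour: int, minute: float) -> float:
        """Smaller angle between the hands. The hour hand moves 30 degrees per
        hour plus 0.5 degrees per minute; the minute hand moves 6 per minute."""
        diff = abs((hour % 12) * 30 + minute * 0.5 - minute * 6) % 360
        return min(diff, 360 - diff)

    assert clock_angle(3, 15) == 7.5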


Is the trick that some people think it’s zero or something?


You need to remember that the hour hand also moves between the hours.


Clearly the interviewing Googler with a 250k salary doesn't value a decent wrist watch where whole minutes and hours tick over nicely in a single movement. ;-)


> Clearly the interviewing Googler with a 250k salary doesn't value a decent wrist watch where whole minutes and hours tick over nicely in a single movement. ;-)

I've never seen a watch where the hour ticks over in a single movement. Do those exist? I imagine it'd be very confusing to see the minute hand at 59 and the hour hand still at 0100 when it's 0159.


First you need to know/remember what analog clocks looked like back in the day.


I have a great video of my kids trying to figure out what a pay phone is back in ~2014. Not rotary, but a similar sort of "wow, I'm old" sort of experience.


Back in the day?

https://www.amazon.com/s?k=analog+clock

Though I was surprised the autocomplete suggests "analog clock for kids learning"


Actually I (never took an algorithms class, so I would definitely have to brush up on the basics there) thought these questions were not that bad; the newer reports seem much harder (or are plain exaggeration by the "leet"code community).

I'm also a bit puzzled about the trick question being mentioned at all. That's just primitive (maybe I'm a little burned by that: in 5th grade or so, our teacher did a test, writing times on a sheet and making us calculate the angle between the clock hands - and it was a lot of numbers on that sheet).


I was good before, but I did exercises for the Google interview for 2-3 months.

I actually overdid the learning, as I assumed that the coding questions couldn't be that easy.

But at the end of the day, I'm still not working for Google, because yes, the coding questions are not that hard, but you have to convince 5 different people that you are a good fit.

Suddenly you're practicing to learn what the interviewer wants from you.

Is that wasted time? Definitely not. Loved the more focused learning approach.

Also, I'm now 10 years into working in this industry, and you don't stop looking for a new job. That interviewing skill is critical.


I feel the same way, except I think the clock-hand question is quite straightforward. Apart from that one, just like every other time this sort of thing is posted, almost every question is either:

1. About fairly low-level details of computer networking, or

2. What would you do if your data doesn’t fit in memory?

I’ve been a successful developer for years and never had to deal with either of those problems.


It's been the same experience for me. I've been successful as a developer and yet have never encountered many of these problems that interviews in the past demanded I know.

Ultimately if I end up encountering a problem like that in my career, the honest answer is to pick up a book or begin reading through articles on solutions. Sadly that kind of answer seems to be frowned upon because it's better for you to lie about what you know than to admit you don't know the solution and figure out ways to rectify it.


> which is probably some trivial math...

The clock hand problem can be solved with the Rule of Three

https://en.wikipedia.org/wiki/Cross-multiplication#Rule_of_T...


> Maybe Google scale isn't for me.

Google employs the very elite of the elite. If you're working there you're probably at the very top of your field. Obviously not everyone can meet that standard, and there's no shame in not being the absolute best.


> Google employs the very elite of the elite.

I feel a little sorry for Google's developers. They're told they're brilliant but asked to write software to deliver adverts and track people. They could literally be curing cancer and putting people on Mars if they wanted, and yet they sit in Mountain View working out new things to do with the data about what I clicked on today. It's a huge waste of talent.


People who dedicate their lives to curing cancer and putting people on Mars are titans for making verrry verrry wee little dents in those problem areas.

Your typical software engineer slept through the minimum amount of biology coursework required of them and has no special interest in chemistry beyond maybe like, a broscience-level grasp of pharmacology related to research compounds commonly viewed as smart drugs.

If bay area futurism woo is indicative of broader trends among software engineers, this person also has a vague, half-ironic commitment either to superintelligent AI immanentizing the eschaton or brain uploading becoming a hot-swappable replacement for the mortal coil within 20 years. Or both.

A belief that Google's workforce constitutes an addressable vat of brainpower of such raw g that, were it not for the idle concerns of developing adtech, it could turn to curing cancer is a very strange belief.


I'm not sure what the intention of the OP was, but I read it as: those engineers could help push those efforts forward by doing what they do best - writing software. Not all of those involved in the effort to cure cancer are studying chemicals and making drugs. There are also people who have to run matrix calculations with MATLAB code, or who have to transcribe experiments into Excel sheets. It's hard to believe that redirecting the pool of talent that builds Google's information-collecting products wouldn't make any difference in the amount of progress made.



A lot of cancer research is about optimizing computation, which is what a lot of Google engineers do. I didn't pick the examples flippantly.


To say that is a stretch is an understatement.

I'll explain why it sounds preposterous to me, and why the idea that google could change the fate of cancer research or missions to mars if not for its profit motives is patently unrealistic to the point of being insulting to everyone involved in this counterfactual.

1. Why cancer?

Number one cause of death is heart disease. Why not that? Surely the #1 problem is good enough for people currently working on adtech if the #2 problem is.

You said curing cancer, not "applying the numerical optimization expertise of a tiny fraction of google's workforce to the occasional omics problem in collaboration with a research lab" or "headhunting the PI of a lifesci research program to build an initiative at Google" or "throwing google infra at something other than Stadia". Even something far more obvious like "applying deep learning to diagnostics / early detection / etc."

These are the things Google can actually do, and they are the things Google already does. Examples: DeepMind for breast cancer screenings, Public Datasets program support for selected research. Still no cancer cure. So if you'd said that, well, they're doing it, to whatever extent they deem worth the prestige bump.

Unless they decide to become Microsoft and buy out entire sectors of applied biotech, they're not going to radically improve their hands-on involvement in lifesci relative to what they're already doing.

More, just ballpark the numbers. Google has 100k employees — "98771" in 2018-12 as per their SERP infobox. "A lot" of those employees absolutely do not by any stretch of the imagination possess the skillsets required to assist in meaningful ways on high-end cancer research, unless that work is conducted like some kind of wartime effort and the assistance is largely of a clerical nature.

And in the event that Google employees were being unilaterally drafted into this effort as grunts, the very first thing they'd do would be to crowdsource/outsource the bulk of the menial work to cheaper labor sources beyond Google. So their main contribution would undoubtedly be in building the scaffolding required to pass the buck.

2. Why Mars?

This constitutes a complete diversion of Google's autonomous driving and robotics people to an out of the blue project that they're out of position to handle. It would merely sabotage a division where Google is a market leader for the sake of starting up an incoherent, laggard mess in an area defined by maddening bottlenecks and complexity. Google has no substantial preexisting expertise in this area, afaik.

At one point they put up 30M USD in sponsorship prize money for a lunar lander competition, but that's about all that comes to mind. It also went pretty dismally. They had to repeatedly extend the deadline. Eventually the comp ran 11 years and ended in one launch -> one crash on the lunar surface. They threw a mil at the team as a consolation prize.

That project was more of an exercise in reifying the moonshot metaphor than anything else.

One far more recent indicator of how hard it is to make strides in this area is the DARPA Launch Challenge, intended to accelerate radical improvements to launch pipelines. It just closed without a winner. One team barely made it to the launchpad then scrubbed.

In terms of both social good that Google can do with its nearest preexisting R&D and profitability, it's clear that redirection would be disastrous. Autonomous vehicles are the play for them. This has nothing to do with the adtech people either way.

All in all, the relevance of Google to either of these problem areas is about what would be expected, cet par: not a lot. Modern work with any technical bent is hyper-specialized. There's no jump from literal cancer like adtech to actual cancer.

On top of that, adtech is largely a house of cards. You can't ever assume that 'data science' coupled to business requirements and marketing is in any way rigorous or indicative of the current state of the art in independent research. A lot of this work is pure smoke and mirrors operating on greater fool theory. To specialize in that sector is to specialize in window-dressing falsity. So it's not like you'd want adtech data jockeys touching subject matter that actually matters.


They are working on things that interest them. What is wrong with that? That is also a little insulting to the researchers that are actually studying how to cure cancer.


> Google employs the very elite of the elite

I needed that laugh, thank you!


Google has at least 50K employees, spread all over the world. I guess it is quite possible to gather and recruit 50K employees who are much better developers than most others outside, but I would not (rather, cannot) call each one of them the elite of the elite!

Look, a great deal of valuable software has been created by Google and the other FAANGs. But I reckon most of that value has actually been created by the first or the best 5000, or maybe 10K, employees at most. Most of the others in Google are enjoying the fruits of their genius, and they don't really have to perform any greater miracle than moving protocol buffers around. Still, no doubt they are highly competent software developers who passed tough interviews. Not to imply that Google interviews can even distinguish a genuinely talented programmer from a diligently-practicing-over-a-year leetcoder.


> I guess it is quite possible to gather and recruit 50K employees who are much better developers than most others outside but I would not (rather cannot) call each one of them elite of the elite !

There are other companies besides Google. Who's to say, for starters, that any of the engineers at the members of FAAN, or Microsoft, or the unicorns, are any less elite?


I mean, they definitely do. They also deploy an army of muppets! (6-yr ex-muppet here)


> Obviously not everyone can meet that standard, and there's no shame in not being the absolute best.

There's an interesting implication in your statement. The inherent assumption is that the metrics Google uses in its hiring interviews define who is the best. In other words, if you used Google's interview process on the whole world, the filter would sort everyone into the "best of the best" and the "not so best".

I don't know the answer, but you may want to consider this aspect before reaching that conclusion.


> Google employs the very elite of the elite

Do all people with a @google.com email address keep telling themselves that?


No. We laugh at this stuff. All sorts of internal memes about "hiring the best engineers in the world to move protobufs around."

There are definitely personalities around the company who think of themselves this way, but certainly not the majority. I think this kind of language in fact hurts us, because it creates a pronounced impostor syndrome around us, as we all _know_ we're mere mortals.

The reality of a SWE job at any BigCorp is quite mundane, and is more politics than coding most of the time.

Honestly, our interview process should test more whether people can write and comment on design docs and fill out a performance review, because that's what they seem to want the most out of us.

EDIT: Oh, also filtering through 2 pages of new emails, mostly automated, to find the two things that are relevant to your job.


I’ve never worked at Google. Have you done one of their interviews?

How do you think they produce the world-leading research results that they do without world-leading people?


I have no doubts that _some_ people working at Google are top of their field. But that hardly applies to all of their workforce...


You might want to look up "Joy's Law".


Is this still true? I know they strove for this 15 years ago, with much success. From what I've seen in the past decade, I highly doubt that this is still true. And I'm not speaking of Google's products, but other aspects like their culture, the opinions of academics, media reports, etc.

