
Seems more plausible to me that young people are just realizing they will be inheriting the consequences of decades of selfish leadership and they are reasonably worried about the future.

The narrative that social media is the cause and not the obvious looming climate change, war and wealth inequality is frustrating.


Maybe. But also I've become a massive history nerd and I can't name a single period in the modern world without serious objective problems. There were wars and diseases and nasty politics and crises and injustices and environmental catastrophes aplenty. Our current slate is serious business, but not uniquely so.

Based on the evidence available to us, what the little doomscrolling box has done is drive us crazy and actually make it harder for societies to act on these problems.


In the 1960s, the teenagers of the day (the people who have been in charge for the last 10-20 years) were inheriting the very real threat of global nuclear war. Those born after about 1980 didn't need to worry about that as teenagers, and it's only in the last few years that it's cropped back up as a possibility; even now it's nowhere near the level of the 60s.


The world had much bigger and more imminent problems in the past. People didn't dwell on them collectively in the same way.


I don't mean to be glib, but you are in denial. Nuclear war is more likely now than at any time since the early Cold War, China is ready to invade Taiwan, climate change is poised to weaken or completely disrupt the Gulf Stream, America is months away from electing an insurrectionist as president, buying a house is a pipe dream for most Americans...

How could you see that as a young person and not feel at least a little hopeless?


We are talking about data that spans over a decade, not a year.

Look at figure 1: https://www.whitehouse.gov/cea/written-materials/2022/05/31/...


Admittedly, I'm being a little polemical; personally, I'm freaked out by the future. But that aside, I'm curious to hear more about your argument. Are you saying worry about the future can't be causing this issue because the scary stuff only started recently?


Some of the examples you gave are too recent to explain the rise in depression that has been going on for over a decade.

And climate change worry has been going on for much longer.


Those are good points. I would have thought that the trend would have started around 2015/2016 if global crises were the main cause. There is a small jump in 2016 for 18-25, but I don't want to read too much into it.


As the other posters have alluded to, read some history and you'll find parallels everywhere. It's serious, but it's often been serious. Previous generations managed to buck up and meet the challenge; what changed? Subjective mindset, not objective situation.


I think the biggest problems come from single points of failure. This wouldn't be as problematic if _everyone_ didn't use the same libraries. I guess you could argue that's similar to trust.

In any case, I agree technologists aren't thinking about it, because the only way to extract massive amounts of wealth is through centralization/monopoly. It's a hard problem, though; the appropriate amount of redundancy is hard to know.


Haven't heard this argument before. But from the Wikipedia article it seems base 3 has the best asymptotic radix economy of the integer bases, yet it isn't much better than base 2, and base 2 is seemingly easier to program and optimize.

Since this argument is new to me, I'd be curious if you had links or some more explanation.
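For my own curiosity I worked through the numbers (my own sketch, using the asymptotic formula from the Wikipedia article): representing N in base b takes about log_b(N) digits, each of which can hold b states, so total cost grows like b*log_b(N) = (b/ln b)*ln(N), and only the constant b/ln(b) depends on the base.

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Asymptotic radix economy: cost of representing N in base b
        // grows like (b/ln b)*ln(N), so b/ln(b) is the figure of merit.
        // Lower is better; e is the optimum, 3 the best integer base.
        for _, b := range []float64{2, 3, 4, math.E} {
            fmt.Printf("base %.3f: b/ln(b) = %.4f\n", b, b/math.Log(b))
        }
    }

That gives roughly 2.885 for base 2 versus 2.731 for base 3, only about a 5% difference, which squares with "isn't much better than base 2".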


Maybe evolutionary algorithms instead? They haven't proven super useful historically, but maybe at the scale of enormous LLMs they will be?


Nope, they're orders of magnitude less efficient because they don't leverage gradient descent.

Rule of thumb in optimization: real numbers are easy, integers are hard


This may be the status quo because of the so-called "hardware lottery": hardware has historically been optimized for floating point. I'm speculating, but if hardware designers were instead only concerned with raw XNOR density and throughput, we might end up with chips powerful enough that giant 1-bit nets could be trained purely through evolution.
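To make that speculation concrete, here's a toy sketch of what "trained purely through evolution" could look like for binary weights: a (1+1) evolution strategy that flips one bit per generation and keeps the mutant if it scores no worse. The fitness function here is a made-up stand-in (counting set bits); a real 1-bit net would evaluate loss on a batch instead.

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Stand-in fitness: number of bits set. A real 1-bit net would
    // return something like negative loss on a training batch here.
    func fitness(w []bool) int {
        n := 0
        for _, bit := range w {
            if bit {
                n++
            }
        }
        return n
    }

    func main() {
        const dim = 64
        parent := make([]bool, dim)
        for gen := 0; gen < 2000; gen++ {
            child := append([]bool(nil), parent...)
            i := rand.Intn(dim)
            child[i] = !child[i] // mutation: flip one random bit
            if fitness(child) >= fitness(parent) {
                parent = child // selection: keep the child if no worse
            }
        }
        fmt.Println("final fitness:", fitness(parent), "of", dim)
    }

Even this toy shows the efficiency gap the other commenter mentions: each generation yields roughly one bit of signal, whereas a gradient step gives you a full descent direction.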


No, it's a fact at the mathematical level that you can enshrine in big O terms if you want to


How do you optimize memory for floating point?


BF8 and other similar formats?


Evolutionary algorithms made you, didn’t they?


That does not prove that they can beat gradient descent.


It took a lot of human brain flops to get to this point in time though, I wonder how many orders of magnitude more than it took to train ChatGPT...


Gradient-directed evolutionary algorithm sounds kinda interesting.


So, marketing is inevitable and necessary, but I have a hypothesis that the current Internet is making it worse. For example, creators (I'm lumping in researchers with songwriters, actors, etc) used to focus on passing the hurdle of getting an "elite" power (record company, publisher, University) to support them. Once over that hurdle, they specialized in creating and left marketing to the elite.

The elites would pressure the creators to do things they thought were marketable, but it didn't always work because creators had some leverage in negotiation and a small number of elites actually cared about making good stuff.

Now there are fewer gatekeepers, but instead there is an all-powerful algorithm. Creators all have to do their own marketing in addition to creating, and the algorithm can't be negotiated with.

So what we wind up with is insipid YouTube thumbnails and myriad academic papers with breathless "state of the art" claims.

There are tradeoffs, but I do think it's worth noticing how effectively we've started to reward creators for marketing rather than creating.


> Ideally the tricks I’m suggesting here will be almost invisible, affecting readers in a subliminal way

Why would I want a math paper to be subliminally manipulating me? I feel like everyone has been watching too much YouTube/TikTok and is buying into the notion that clickbait isn't just a vicious feedback cycle destroying everyone's integrity.


Everything is always subliminally affecting you. It might as well do it in a helpful way.


> everything is always subliminally affecting you

Right, but certain methods are more effective than others. This post is explaining, and encouraging, exactly how to manipulate more effectively.

> It might as well do it in a helpful way

I agree that being more effective at teaching is a good thing. But a math paper isn't for teaching; it's for showing a proof or making an argument. I just think we ought to set standards for academic research to remain as neutral as possible, to let ideas flourish on merit rather than on cunning tricks.

EDIT: I get that a career in academia requires all these games to get more citations. Looking at ML research, I feel like abstracts are written by used car salesmen nowadays. So, like, if you have to do it, do it. But we ought to call it out from time to time.


I think it was the word "subliminal" that made you think of manipulation.

On the other hand, I read the quote you posted as meeting the reader at their level and guiding them to a clearer, deeper understanding by providing information in a logical, intuitive way. This would mean, for example, providing real-world context for each abstract concept introduced, rather than just leaving the concept by itself together with an abstract definition.


I mean... it's no different from a story being written well instead of badly. The dry paper is just worse in every way, at both the author's goals and yours.


> This may require “watering down” the results being described — stating corollaries or special cases instead of the full theorems in their maximal generality. Sometimes you may even need to leave out technical conditions required for the results to really be true.

This is a trade-off, don't you think? Without the marketing, the paper would be more complete and correct.


No, your excerpt is not complete and correct.

"So, the introduction to a math paper should set the scene as simply as possible"

The introduction is not the whole paper.


If I have a salad and I water down just the dressing the whole salad is still worse, no? Unless you are saying the introduction isn't important, in which case why would you need to change it at all to make the paper more engaging?


In that odd metaphor the dressing started disgusting and you're watering it down in order to make it palatable.


No, you skipped the context, which says that in the introduction you talk about special cases to build intuition, and then provide the general result in the technical part.


Huh, I guess I missed the part about providing the general result in the technical part (still don't see it, but that makes sense); ultimately that does seem like a good idea. At least I've heard we understand better from examples first and then generalities.

My (admittedly grumpy) gripe is more that the blog's premise is that "dull" is bad, and it suggests embellishing candid mathematics with "heroes" and "conflict". If the paper isn't _clear_, that's one thing, but "trick"ing the reader into spending more time on your article than they otherwise would is patronizing at best and disingenuous at worst.


What choice can you point to that has increased Google's market cap? Google's search quality is deteriorating, Waymo has yet to make a profit, GCP is struggling, "Attention Is All You Need" was too early on to credit to Sundar, and since then Google's reputation has been flagging due to embarrassing AI models...

I think Google's market cap is increasing _despite_ Sundar, not because of him. That said, Google does have enough resources to turn it around; I just don't think Sundar is the right person to do it.


GCP is struggling? Is that an opinion based on experience or based on actual stats? Because GCP revenue has increased every year since 2017. AWS and Azure are capturing more market share. But I’m not sure I consider third place and $33B in revenue last year a struggle.[1]

> I just don't think Sundar is the right person to do it

That was sort of my point. Does Google need to be turned around? All stock metrics and revenue metrics show that they are doing well as a company.

Sure the AI model stuff was embarrassing. But it doesn't seem to be having an impact on the value of the company. Maybe goodwill was hurt. But if we've learned anything from the Meta drama over the years, people will be quick to forget about it. I don't think a few fixable and public missteps like that will sink the company. Does anyone outside of tech even know about it? It's possible it's indicative of a larger internal issue that's brewing. But it's not impacting the value of the company… yet at least.

[1] https://www.statista.com/statistics/478176/google-public-clo...


Google Cloud Platform's growth is slowing [1]. It's finally profitable, but I don't think distant third place is what Google was looking for after 10 years of pouring billions of losses into it.

At the time of this news, Google's stock fell 5 percent, so it is hurting the company's valuation, at least temporarily.

Google's stock might be rising, but what is causing that? I can't point to any material success of Google in the last 10 years to explain it; it seems to me Google is coasting.

[1] https://www.reuters.com/technology/google-parent-alphabet-re...


Google is in the familiar position where their numbers are good and the charts are all pretty, but when you put your ear to the ground, you only hear sounds of trouble.

If Google had proper leadership, the company would easily be worth twice its current value. Easily. Instead we have a situation where raw capitalist inertia is carrying the company forward, while active discussions of the company are ridden with grievances and frustrations. Grievances and frustrations in a market where there are competitors that users can flee to. It's a bad spot to be in.

There is no reason Google shouldn't be the ones on the cusp of releasing GPT-5 level LLMs. None. Instead, however, they have a middling LLM that is scared to mention white people. So back to the drawing board so they can work out the racial kinks, while the competition blasts past them.

Google needed big-company, technology-focused leadership 5 years ago. But tomorrow would be good too.


You make a good point about the possibility that they could be even more successful with stronger leadership and product focus. I can’t argue with that and don’t disagree.

My points were focused on the fact that the data just doesn’t currently show Google failing or declining as a company.

It’s going to be really interesting to see how the Google AI strategy plays out. I agree that they could have absolutely been the leader. They had the money, resources, and ingredients to make it happen.

I believe that AI is a threat to their current business model. How much did that influence their investment and focus on it?


Imagine having the talent and money of Google and accomplishing essentially nothing in ten years, but collecting 200 million a year.


I hear "revolutionary" claimed about languages like Rust, Haskell and Zig, but rarely about Go. What about Golang is revolutionary to you?


There are several features that can be considered neat on their own. Most of those features are probably derived from other languages, but together they form a very powerful language that simply takes away the pain that I feel using other modern languages, such as C++, Python, Java and NodeJS.

Here are a few off the top of my head:

1) CSP concepts embedded deeply into the language (goroutines/channels/select), making concurrency easy to do correctly (a minimal sketch follows this list)

2) Standard Library and Go toolchain providing everything that most languages use third party libraries for (formatting, testing, benchmarking, fuzzing, HTTP, crypto, etc...)

3) Compilation into a static binary that can just be copied from machine to machine without any dependencies whatsoever (even C struggles with that on Linux, due to glibc NSS fiasco)

4) Cross-compilation by changing two environment variables

5) Minimalistic distribution system - just write `import "github.com/person/repository"` - no need for packaging, pom.xml, requirements.txt, package.json, etc.

6) Interface-based modularity (structural typing), making code reuse much easier than the usual OOP-style abstract-class based modularity (nominal typing)

7) Extremely fast compilation, which makes read-modify-run development loop as fast as with interpreted languages
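To illustrate point 1, a minimal sketch (my own toy example, not from any real codebase): two goroutines produce results on channels, and select multiplexes them with a timeout.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        a := make(chan string)
        b := make(chan string)
        go func() { a <- "result from a" }() // goroutines run concurrently
        go func() { b <- "result from b" }()
        for i := 0; i < 2; i++ {
            select { // block until one of the channels is ready
            case msg := <-a:
                fmt.Println(msg)
            case msg := <-b:
                fmt.Println(msg)
            case <-time.After(time.Second):
                fmt.Println("timed out")
            }
        }
    }

No thread pools, no futures, no colored functions; the same pattern scales to thousands of goroutines.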


1) Modula-2, Active Oberon, Erlang

2) .NET, Java, Smalltalk, Common Lisp

3) Any compiled language until the mid-1990's.

4) Amsterdam Compiler Kit, 1980

5) Until the repo changes, forbids distribution of binary libraries

6) Standard ML, Caml Light, OCaml, Haskell,...

7) Turbo Pascal on CP/M, MS-DOS computers running at 7 MHz, with 640KB.


If your post is intended to be a remark on how nothing in Go is "revolutionary", please read the first paragraph of my post, and notice how there isn't a single language in your list which is in all 7 categories.

Additionally:

- Erlang does not implement CSP, it implements Actor model

- Java does NOT have all the listed features included in its default toolkit - hence the existence of Gradle, Maven and all other packaging/testing/benchmarking solutions

- The "until mid 1990's" is the keyword here - I'm talking about modern languages and I explicitly pointed that out

- ACK is not part of any language; it is an external tool that may or may not be reliable, but it definitely does not have a toolchain/standard-library level of quality/stability guarantee.

- "Until the repo changes" - packages can disappear from any system, see leftpad incident

- "forbids distribution of binary libraries" - not true, see [0]

[0] https://docs.google.com/document/d/1nr-TQHw_er6GOQRsF6T43GGh...

---

However, if I have misread your tone, and your post was intended to be an informative list of languages Go was inspired by, then thanks for the information. But some of it is misleading or false.


My opinion on Go's "innovation" is well known on HN, and on golang-nuts back when I cared, pre-1.0.

I could go over those points one by one in detail, including Russ Cox's point of view on disabling binary distribution, but I'm not feeling motivated to press further on the wound.


I have no idea who you are nor do I care about internet pseudocelebrities, sorry. Your opinions, to me, are just words from a random stranger, whose merit is only insofar as I can learn something new from them.


No issues, I also don't care.


Unfortunately, the sort of soft trolling he keeps doing is allowed on here. It reflects extremely poorly on the mod team and is frankly just pathetic.


> and notice how there isn't a single language in your list which is in all 7 categories.

Implementing, sometimes quite poorly, all 7 categories does not a revolution make.

Golang looked at the past 60 years of programming evolution and decided it needed almost none of it, and ignored any developments in programming languages of the past thirty years or so. This is not revolutionary. It is, at best, reactionary.


Topics where Go has failed:

  - No tail call optimization
  - No meta-programming
Both have existed in Scheme since 1975.


The way concurrency works is pretty unique amongst mainstream languages.

Java has just copied some parts of how concurrency works in Go, but that's nearly 20 years after Go was released.

It's extremely easy to start up code concurrently with "go foo()". You can start up lots of such functions concurrently, as it works in userspace. Like async code, but no "colored functions" problem.


Actually Java brought back the green threads model that it had before Go came to be.

The difference is that now red and green threads are exposed at the API level, and not an implementation detail.

Hardly copying Go.


> but that's nearly 20 years after Go was released.

Go 1.0 was released in 2012.


Quoting "colored functions" is a problem of skill. It is a tell of engineer's lack of understanding of concurrency.


Can you please elaborate why?


The concurrency model basically came from Erlang, which in my opinion does it better.


Colored functions is only a problem in JS


And Python... And Rust... And C#...


And C#, Python, Rust...


You can de-color an async function by blocking.


In most implementations, blocking in an async function has the unintended side effect of blocking all other async functions running on the same executor.


It's not the case in C#. It is discouraged, but mainly because there used to be so much sloppily written async code that managed to bring the thread pool to its knees, despite hill-climbing and blocked-thread detection doing a lot of heavy lifting, that the community has grown scar tissue against this. It's rarely an issue, if ever, in the last 5 years or so.


Really sad to see such an influential computer scientist lose interest in advancing computing over a perceived slight against his legacy.

Ironically, I think Stroustrup is actually doing more harm than good to his reputation by "evolving" C++ rather than simply putting it in maintenance mode and contributing to a modern language.


I agree. I think he could have done wonders starting fresh with something new.

