Years ago, my college multivariable calculus and linear algebra courses were both taught primarily from course materials that were interactive Mathematica notebooks.
We had access to all of the symbolic algebra tools and were even expected to use them regularly for both courses. It was great!
I'm not sure how well this would extend to introductory courses though, especially if the standardized tests still expect integration by hand.
Those same companies often invest in accessibility for vision-impaired users. I'm not sure you need a screen capture to scrape content when the site is designed to be navigable with a screen reader.
For anyone else who was confused to see a paper use the same name as a commercial product: it looks like Google Gemini was announced in May, whereas this was submitted to SOSP, which had an April submission deadline.
It's not a good name to give to anything. Unless you're a corporate giant, name creativity is really important to making your work findable and re-findable.
> name creativity is really important to making your work findable and re-findable.
This is underrated advice. I’ve seen so many products and even companies fail because the name led to millions of unrelated search results. Even if they are a giant, it can still lead to bad outcomes.
I think this points more to how slow the paper submission process is compared to the product creation velocity. No wonder arxiv has been such a hit for the ML community.
GPU performance per dollar is only competitive for specific workloads. For extremely large scale compute, getting enough data center GPUs can also be challenging.
Lower counterparty risk, and harder to confiscate. Money can sit in a crypto wallet and be used for illicit transactions until favorable circumstances allow conversion out of BTC. You can’t do that with cash in a bank.
With the number of shady altcoins and defunct BTC exchanges, separating authentic from bogus transactions is even tougher for regulators.
Alas, the problem with Java, which I say as a begrudging long-time Java developer, is that "supports this distinction" is a theoretical benefit that is seldom realized in practice. Checked and unchecked exceptions get so thoroughly abused and twisted into byzantine contraptions that any distinction, and whatever value might be gained from it, is completely destroyed by the common free-form usage throughout the ecosystem.
The precondition thing, while indeed common, drives me sorta insane. I think it's a pattern Java folks need to move on from. You've got this lovely type system (used loosely). If you need a precondition because you've got some fundamental invariant in the system, checking it at runtime rather than encoding it into the type system is such a missed opportunity. If I try to do something inherently wrong, I don't want the code to even compile!
This blog post really captures where null checking should go, and how to record in the type system that you've already vetted a field for correctness, so that the rest of your code never has to worry about it -- and further, cannot, because the types don't allow it: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
This is echoed in an amazing book called Domain Modeling Made Functional, which radically changed how I thought about what a type system is and what it can actually do for us if we lean on it correctly (even a relatively crummy one like Java's!).
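To make that concrete, here is a minimal Java sketch of the parse-don't-validate idea (NonBlankName and Greeter are made-up names, not from the blog post or the book): the invariant is checked exactly once at the boundary, and afterwards simply holding the type is the proof that the check already happened.

    // The constructor is private; parse() is the only way in, so every
    // NonBlankName in the program is known to satisfy the invariant.
    public final class NonBlankName {
        private final String value;

        private NonBlankName(String value) {
            this.value = value;
        }

        // Parse once at the boundary; fail loudly if the input is invalid.
        public static NonBlankName parse(String raw) {
            if (raw == null || raw.isBlank()) {
                throw new IllegalArgumentException("name must be non-blank");
            }
            return new NonBlankName(raw.trim());
        }

        public String value() {
            return value;
        }
    }

    // Downstream code asks for the parsed type, so "forgot to validate"
    // is no longer something that can compile.
    final class Greeter {
        String greet(NonBlankName name) {
            return "Hello, " + name.value() + "!";
        }
    }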
The problem is that while Java the language itself does support that distinction, a lot of built-in stuff really messes it up. For example, exceptions from closing a file are unexpected, but are an IOException which is checked anyway. Also, even the support that is in the language isn't first-class; e.g., lack of exception polymorphism.
I think that's a symptom of the fact that the distinction is really artificial at the language level anyway. Whether something is expected or not is a function of the requirements. Even OutOfMemoryError can be expected and handled in certain types of applications (especially since it gets thrown for things like file handles rather than true memory). And then there are all kinds of cases where routine exceptions like file-not-found are, in fact, unexpected errors (as discussed in TFA).
Perhaps some sort of language-level solution could have been found (e.g., explicit interfaces to mark exceptions as expected or unexpected, with exceptions assigned to them via generics or something), but that ship sailed long ago.
This is right. Therefore, in most cases, a library should throw a checked exception, and the caller should decide: if it's an expected error, handle it or rethrow it; if it's unexpected, rethrow it as a RuntimeException.
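A small sketch of what that division of labor could look like (ConfigLoader, readConfig, and loadOrDefault are hypothetical names): the lower layer declares the checked IOException, and this particular caller decides that a missing file is expected while any other I/O failure is not.

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.NoSuchFileException;
    import java.nio.file.Path;

    final class ConfigLoader {
        // The library-ish layer declares the checked exception and lets the
        // caller decide what counts as expected.
        static String readConfig(Path path) throws IOException {
            return Files.readString(path);
        }

        // This caller treats "file missing" as expected and handles it, but
        // treats any other I/O failure as unexpected and rethrows unchecked.
        static String loadOrDefault(Path path) {
            try {
                return readConfig(path);
            } catch (NoSuchFileException e) {
                return "";                          // expected: fall back to defaults
            } catch (IOException e) {
                throw new UncheckedIOException(e);  // unexpected: let it propagate
            }
        }
    }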
Unchecked vs Checked is one of the things I like least about Java. Programmers tend to make everything Unchecked because it leads to easier code for API users at the cost of correctness/error handling.
Modern Java should not produce a lot of checked exceptions. Unfortunately, a large part of the standard library is 25 years old and still full of things that throw checked exceptions. If you use something like Spring or Quarkus, you won't find many of those.
Kotlin improved on Java by treating all exceptions as unchecked, including those from Java code. This was intentional and based on the observation that checked exceptions in Java were simply a mistake. Modern Java frameworks don't tend to use them for this reason. Kotlin fixed several other language design mistakes in Java, and that's a reason it is used as a drop-in replacement for Java in a lot of places. It also makes what Guava and Lombok do for Java completely redundant: it's all part of the language and standard library. Android, Spring, Quarkus, etc. all become nicer to deal with when you swap out Java for Kotlin. I find dealing with Java code very awkward these days. I used it for years and it just looks so ugly, clumsy, and verbose to me now.
The most common catch block in Java is e.printStackTrace(), because that's what your IDE will insert. That's stupid code, and replacing it with logger.error(e) is only marginally better. Idiomatic Java is actually re-throwing exceptions as RuntimeExceptions so your framework can handle them for you in a central place and show a nice not-found page or bad-request page (or the dreaded "we f*ked up" internal server error page). That too is stupid code to write, and with Kotlin, re-throwing exceptions is not really a thing. Why would you? Either you handle the exception or it just bubbles up to a place where it is handled or not. If you want people to deal with exceptions, you wrap them in a Result<T> in Kotlin. Java has a similar thing called an Optional, but it is mostly just used to dodge null pointer exceptions (which in Kotlin are rare because it has nullable types), and of course it does not actually contain the original exception.
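For what it's worth, here's a rough Java sketch of the "wrap it unchecked and let one central place deal with it" idiom described above. NotFoundException, UserService, and CentralErrorHandler are made-up stand-ins for what a framework's central exception handler (e.g. Spring's @ControllerAdvice) does for you.

    // Domain code throws an unchecked exception and moves on.
    class NotFoundException extends RuntimeException {
        NotFoundException(String message) { super(message); }
    }

    final class UserService {
        String findUser(String id) {
            // lookup elided; assume the user does not exist
            throw new NotFoundException("no user with id " + id);
        }
    }

    // One central place maps exceptions to responses, so individual call
    // sites don't need catch blocks at all.
    final class CentralErrorHandler {
        String handle(Runnable request) {
            try {
                request.run();
                return "200 OK";
            } catch (NotFoundException e) {
                return "404 Not Found";             // the "nice not found page"
            } catch (RuntimeException e) {
                return "500 Internal Server Error"; // the "we f*ked up" page
            }
        }
    }

    // Usage: new CentralErrorHandler().handle(() -> new UserService().findUser("42"))
    // returns "404 Not Found".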
It's not clear to me that checked exceptions are actually a mistake, rather than just developers getting annoyed at their compiler forcing them to handle errors.
Just fyi, Scala predates Kotlin in not enforcing checked exceptions.
Checked exceptions require the implementation to distinguish between expected and unexpected errors. But as pointed out in the article, whether an error is expected or unexpected is more a function of the use case than the implementation.
That said, I've also seen plenty of competitive drama in FAANG research labs, so this story is not hard to believe. More senior engineers will often use their seniority to grab control of projects. It sounds like Google execs did the right thing in the end.
The key idea is to break code into "chunks" that each do one thing.
Then, if you have to add a new feature, it goes into another chunk instead of modifying existing code.
The same logic applies to system design at different scales, whether fine-scale OOP or coarser-scale (micro)service architecture. The ideal size of an individual "chunk" is somewhat subjective & debatable, of course.
It's like Haskell-style immutable data structures, but applied to writing the code, itself.
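A tiny Java sketch of the "new feature goes into a new chunk" idea (Discount, PercentOff, and Checkout are made-up names): adding a new kind of discount means adding a class, not editing existing ones.

    interface Discount {
        double apply(double price);
    }

    final class NoDiscount implements Discount {
        public double apply(double price) { return price; }
    }

    final class PercentOff implements Discount {
        private final double percent;
        PercentOff(double percent) { this.percent = percent; }
        public double apply(double price) { return price * (1 - percent / 100.0); }
    }

    // A new feature, e.g. a flat holiday discount, arrives as another class;
    // the checkout code below never changes.
    final class Checkout {
        double total(double price, Discount discount) {
            return discount.apply(price);
        }
    }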
Microservices are just OOP/dependency injection, but with RPCs instead of function calls.
The same criticisms made of microservices (that they add complexity, or have too many pieces) are also made of OOP.
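To make the comparison concrete, here's a toy Java sketch (PriceService, LocalPriceService, HttpPriceService, and Cart are made-up names): the caller depends on an interface either way; only the wiring decides whether the call stays in-process or crosses the network.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    interface PriceService {
        double priceOf(String sku);
    }

    // "OOP/dependency-injection" flavour: a plain in-process implementation.
    final class LocalPriceService implements PriceService {
        public double priceOf(String sku) {
            return 9.99;
        }
    }

    // "Microservice" flavour: same interface, but the call becomes an RPC.
    final class HttpPriceService implements PriceService {
        private final HttpClient client = HttpClient.newHttpClient();

        public double priceOf(String sku) {
            try {
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://prices.internal/price/" + sku)).build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                return Double.parseDouble(response.body());
            } catch (Exception e) {
                throw new RuntimeException(e); // the network adds a new failure mode
            }
        }
    }

    // The caller is injected with either implementation and cannot tell the
    // difference, except through latency and failure modes.
    final class Cart {
        private final PriceService prices;
        Cart(PriceService prices) { this.prices = prices; }
        double total(String sku, int qty) { return prices.priceOf(sku) * qty; }
    }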
Curiously, while folks sometimes complain about breaking up a system into smaller microservices or smaller classes, nobody ever complains about being asked to break up an essay into paragraphs.
I don't think the paragraph metaphor works well since written works are often read front to back, and the organizational hierarchy isn't so important on such a linear medium. There are books that buck the trends and IMO you don't really notice the weirdness once you get going. E.g. books with long sentences that take up the whole paragraph, or paragraphs that take up the whole page, or both at the same time. Some books don't have paragraphs at all, and some books don't have chapters.
Splitting material into individual books makes a little more sense as a metaphor, especially if it's not a linear series of books. You can't just split a mega-book into chunks. Each book needs to be somewhat freestanding. Between books, there is an additional purchasing decision introduced. The end of one book must convince you to go buy the next book, which must have an interesting cover and introduction so that you actually buy it. It might need to recap material in a previous book or duplicate material that occurs elsewhere non-linearly.
A new book has an expected cost and length. We expect to pay 5-20 dollars for a few hundred pages of paperback to read for many hours. We wouldn't want to pay cents for a few pages at a time every 5 minutes. (or if we did, it would require significantly different distribution like ereaders with micropayments or advertising). Some books are produced as serials and come with tradeoffs like a proliferation of chapters and a story that keeps on going.
Anyway, it's a very long way to say that some splitting is merely style, some splitting has deeper implications, the splits can be too big or too small, and some things might not need splits at all.
[author] uses the [simile] to argue the [argument].
The obvious flaw in the [argument] is of course [counterargument].
[quote]: Curiously, while folks sometimes complain about breaking up a system into smaller microservices or smaller classes, nobody ever complains about being asked to break up an essay into paragraphs.
[author]: Mr_P
[simile]: microservices or smaller classes are like paragraphs in an essay.
[argument]: since no one complains about breaking up an essay into paragraphs, no one should complain about breaking up a system into smaller microservices or classes.
[counterargument]: breaking up a system into smaller microservices or classes is not at all like breaking up an essay into paragraphs, which I think this comment has demonstrated.
> Curiously, while folks sometimes complain about breaking up a system into smaller microservices or smaller classes, nobody ever complains about being asked to break up an essay into paragraphs.
The amount of work involved differs by orders of magnitude across these cases. (I'm not saying any of it is a lot of work, but some of those cases involve significantly more than the others.)
Perhaps "break up your book into chapters" is a better metaphor for microservices. Breaking a chapter into paragraphs makes me think more of OO design or functional decomposition.
It's more like breaking up into whole books. Each is stored, distributed, addressed, and built separately. You have to become an expert at making the implied overhead efficient, because it will dominate everything you do.
> Curiously, while folks sometimes complain about breaking up a system into smaller microservices or smaller classes, nobody ever complains about being asked to break up an essay into paragraphs.
They would if each paragraph of that essay lived at a different domain/url.
A microservice contains many classes. Those classes are organized into packages, so many of them are necessarily "public." The microservice boundary is a new kind of grouping, where even this collection of packages and public classes presents only one small interface to the rest of the architecture. AFAIK this is not a common or natural pattern in OOP, and normal visibility schemes don't support or encourage it.
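A rough sketch of that grouping, with hypothetical names and two files shown together: both classes have to be public so the service's own packages can see each other, yet only OrdersController is meant to be reachable from outside the service, and only over the network rather than via imports.

    // File: orders/internal/OrderRepository.java
    // Public so that orders.api can use it, even though nothing outside this
    // microservice is supposed to call it directly.
    package orders.internal;

    public class OrderRepository {
        public String find(String id) {
            return "order-" + id;   // lookup elided
        }
    }

    // File: orders/api/OrdersController.java
    package orders.api;

    public class OrdersController {
        // The one small interface the rest of the architecture talks to (over HTTP/RPC).
        public String getOrder(String id) {
            return new orders.internal.OrderRepository().find(id);
        }
    }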