I was reading a lot of technical books and kept highlighting things I wanted to remember — but I rarely went back to review them. The notes just sat there, on my Kindle or in the reading app.
So I started building something simple: a tool that lets me turn highlights into flashcards with as little friction as possible.
Just select text on your iPhone, share it with the app, and it creates a flashcard using AI — a Q&A pair and a short summary. You can browse cards in the app, or show them on your Home Screen, Lock Screen, or the watch face of your Apple Watch.
This is my first iOS app, and building it has been a great learning experience. I'm using Supabase for the backend, which has been mostly great.
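For anyone curious, the reformulation step is essentially a single LLM call. Here's a minimal sketch of what it could look like (in Python rather than the app's Swift; the prompt wording, model choice, and JSON shape are my assumptions, not the app's actual code):

```python
# Minimal sketch of the highlight -> flashcard step. Prompt, model, and JSON
# shape are illustrative assumptions, not the app's actual implementation.
import json

from openai import OpenAI

client = OpenAI()

PROMPT = """Turn the following highlight into one flashcard.
Return JSON with keys "question", "answer", and "summary" (one short sentence).

Highlight:
{highlight}"""

def make_flashcard(highlight: str) -> dict:
    # One chat completion turns the shared text into a Q&A pair plus a summary.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model with JSON mode works
        messages=[{"role": "user", "content": PROMPT.format(highlight=highlight)}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```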
I really like this. I'm going to grab some of my kiddos' 7th grade biology notes (handwritten), and see what it does. I did notice that it's a little hard to manage your deck: removing and shuffling are not obvious?
Let me know how it goes! I'm not always very happy with how the cards turn out, will see what I can do with some LLM prompt tuning.
Yeah, it's very barebones at the moment. If you swipe a deck's name left in the list of decks, you should be able to delete it, but I haven't added the ability to delete individual cards yet. Shuffling isn't possible; adding it to the backlog.
Given that it's summer break and school is not at the top of their thoughts, they're actually pretty thrilled. Here's what my youngest did: she took her hand-copied notes (from the book), took a photo with her camera, highlighted her notes, and had it create cards. She did this (rapidly) for a whole page of notes, and then had a card deck to drill from.
I'm really glad to hear that! Two things I've thought about that would probably make the process even smoother:
- Create cards directly from camera input in the app.
- Create multiple cards from a source text, so that she can just supply a text body and the LLM figures out what is important, and the user can then discard cards they don't find useful (rough sketch below).
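A rough sketch of how that second idea could work, with the LLM returning a batch of candidate cards for the user to review (prompt, model, and JSON shape are assumptions, not a real implementation):

```python
# Sketch of bulk card extraction from a body of text; everything here is illustrative.
import json

from openai import OpenAI

client = OpenAI()

BULK_PROMPT = """Read the following notes and extract the 5-10 most important facts.
Return JSON of the form {{"cards": [{{"question": "...", "answer": "..."}}]}}.

Notes:
{text}"""

def make_cards(text: str) -> list[dict]:
    # One call yields several candidate cards; the user keeps the useful ones
    # and discards the rest.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": BULK_PROMPT.format(text=text)}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)["cards"]
```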
You could... ? Or, it can just be what it is: addressable from "share". That has the nice property that if she gets a copy of notes from friends, or the internet, it's not "built in": it uses the functionality already in iOS?
I’m sure the algorithm can play a huge role in the effectiveness of learning but for me the difficult part was always creating the cards and actually opening the app to practice.
I've built Komihåg [1] to try and combat this: select any text on your iOS device and a flashcard is automatically created for you, and the app then shows you the cards on the Home Screen / Lock Screen / Apple Watch face.
I haven't gotten around to implementing any sophisticated scheduling algorithm yet, but will definitely do that eventually.
Cool idea. I'll check it out. It would be cool to do something like this for single-word highlights on an e-reader. If I have highlighted a single word it is because I want to add it to my vocabulary.
Your app looks cool! I've tried a few other apps doing similar things; Clearspace is the one I'm using now. Will give yours a try!
I'm in a similar situation as you (developer having to do marketing) but haven't gotten as far; so far I've only posted on a few subreddits and here on HN. Have you found any nice learning resources?
You select some text on your phone and share it with my app, then the shared text is reformulated into a flashcard (with the help of an LLM).
You can then browse your flashcards in the app, but I'm also working on ways to show the cards to you with less friction, like on the phone's Lock Screen or on the face of your watch.
R1 and o1 point towards a world where training models will be a small bucket of the overall compute. That doesn't mean the total amount of AI compute will stop accelerating, just that interconnected mega clusters are not the only or most efficient way to run a majority of future workloads. That should be negative news for the company that is currently the only one capable of making chips for these clusters, and positive news for the players that can run inference on a single chip, as they will be able to grab more parts of the compute pie.
>Now, you still want to train the best model you can by cleverly leveraging as much compute as you can and as many trillion tokens of high quality training data as possible, but that's just the beginning of the story in this new world; now, you could easily use incredibly huge amounts of compute just to do inference from these models at a very high level of confidence or when trying to solve extremely tough problems that require "genius level" reasoning to avoid all the potential pitfalls that would lead a regular LLM astray.
I think this is the most interesting part. We always knew a huge fraction of the compute would be on inference rather than training, but it feels like the newest developments are pushing this even further towards inference.
Combine that with the fact that you can run the full R1 (680B) distributed on 3 consumer computers [1].
If most of NVIDIA's moat is in being able to efficiently interconnect thousands of GPUs, what happens when that is only important to a small fraction of the overall AI compute?
Conversely, how much larger can you scale if frontier models only currently need 3 consumer computers?
Imagine having 300. Could you build even better models? Is DeepSeek the right team to deliver that, or can OpenAI, Meta, HF, etc. adapt?
Going to be an interesting few months on the market. I think OpenAI lost a LOT in the board fiasco. I am bullish on HF. I anticipate Meta will lose folks to brain drain in response to management equivocation around company values. I don't put much stock into Google or Microsoft's AI capabilities, they are the new IBMs and are no longer innovating except at obvious margins.
Google is silently catching up fast with Gemini. They're also pursuing next gen architectures like Titan. But most importantly, the frontier of AI capabilities is shifting towards using RL at inference (thinking) time to perform tasks. Who has more data than Google there? They have a gargantuan database of queries paired with subsequent web nav, actions, follow up queries etc. Nobody can recreate this, Bing failed to get enough marketshare. Also, when you think of RL talent, which company comes to mind? I think Google has everyone checkmated already.
Can you say more about using RL at inference time, ideally with a pointer to read more about it? This doesn’t fit into my mental model, in a couple of ways. The main way is right in the name: “learning” isn’t something that happens at inference time; inference is generating results from already-trained models. Perhaps you’re conflating RL with multistage (e.g. “chain of thought”) inference? Or maybe you’re talking about feeding the result of inference-time interactions with the user back into subsequent rounds of training? I’m curious to hear more.
I wasn't clear. Model weights aren't changing at inference time. I meant that at inference time the model will output a sequence of thoughts and actions to perform tasks given to it by the user. For instance, to answer a question it will search the web, navigate through some sites, scroll, summarize, etc. You can model this as a game played by emitting a sequence of actions in a browser. RL is the technique you want for training this component. To scale this up you need a massive amount of examples of sequences of actions taken in the browser, the outcome it led to, and a label for whether that outcome was desirable or not. I am saying that by recording users googling stuff and emailing each other for decades, Google has this massive dataset to train their RL-powered browser-using agent. DeepSeek proving that simple RL can be cheaply applied to a frontier LLM and have reasoning organically emerge makes this approach more obviously viable.
Makes sense, thanks. I wonder whether human web-browsing strategies are optimal for use in an LLM, e.g. given how much faster LLMs are at reading the webpages they find, compared to humans? Regardless, it does seem likely that Google's dataset is good for something.
They pick out a website from search results, then nav within it to the correct product page and maybe scroll until the price is visible on screen.
Google captures a lot of that data on third party sites. From Perplexity:
- Google Analytics: If the website uses Google Analytics, Google can collect data about user behavior on that site, including page views, time on site, and user flow.
- Google Ads: Websites using Google Ads may allow Google to track user interactions for ad targeting and conversion tracking.
- Other Google Services: Sites implementing services like Google Tag Manager or using embedded YouTube videos may provide additional tracking opportunities.
So you can imagine that Google has a kajillion training examples that go:
search query (which implies task) -> pick webpage -> actions within webpage -> user stops (success), or user backs off site/tries different query (failure)
You can imagine that even if an AI agent is super efficient, it still needs to learn how to formulate queries, pick out a site to visit, nav through the site, do all that same stuff to perform tasks. Google's dataset is perfect for this, huge, and unparalleled.
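To make the shape of that data concrete, one training example for such a browser-using agent might look roughly like this (the field names, example values, and reward scheme are made up for illustration, not anything Google actually stores):

```python
# Illustrative trajectory format: query -> actions -> outcome label.
from dataclasses import dataclass

@dataclass
class BrowserTrajectory:
    query: str          # the search query, which implies the task
    actions: list[str]  # result picks, in-site navigation, scrolls, follow-up queries
    success: bool       # user stopped (success) vs. backed off / re-queried (failure)

example = BrowserTrajectory(
    query="airpods pro 2 price",
    actions=[
        "click result: apple.com/airpods-pro",
        "scroll until the price is visible",
    ],
    success=True,
)

# In RL terms the outcome label becomes the reward the agent is trained to maximize.
reward = 1.0 if example.success else -1.0
```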
How quickly the narrative went from 'Google silently has the most advanced AI but they are afraid to release it' to 'Google is silently catching up' all using the same 'core Google competencies' to infer Google's position of strength. Wonder what the next lower level of Google silently leveraging their strength will be?
Google is clearly catching up. Have you tried the recent Gemini models? Have you tried deep research? Google is like a ship that is hard to turn around but also hard to stop once in motion.
It seems like there is MUCH to gain by migrating to this approach - and the cost of switching should theoretically be small compared to the rewards to reap.
I expect all the major players are already working full-steam to incorporate this into their stacks as quickly as possible.
IMO, this seems incredibly bad for Nvidia, and incredibly good for everyone else.
I don't think this seems particularly bad for ChatGPT. They've built a strong brand. This should just help them reduce - by far - one of their largest expenses.
They'll have a slight disadvantage compared to, say, Google, who can much more easily switch from GPU to CPU. ChatGPT could have some growing pains there; Google would not.
> I don't think this seems particularly bad for ChatGPT. They've built a strong brand. This should just help them reduce - by far - one of their largest expenses.
Often expenses like that are keeping your competitors away.
Yes, but it typically doesn't matter if someone can reach parity or even surpass you - they have to surpass you by a step function to take a significant number of your users.
This is a step function in terms of efficiency (which presumably will be incorporated into ChatGPT within months), but not in terms of end user experience. It's only slightly better there.
One data point, but my ChatGPT subscription gets cancelled every time, so every month I make the decision to resub. And because the cost of switching is essentially zero, the moment a better service is out there I will switch in an instant.
This assumes no (or very small) diminishing returns effect.
I don't pretend to know much about the minutiae of LLM training, but it wouldn't surprise me at all if throwing massively more GPUs at this particular training paradigm only produces marginal increases in output quality.
I believe the margin to expand is on CoT, where tokens can grow dramatically. If there is value in putting more compute towards it, there may still be returns to be captured on that margin.
Would it not be useful to have multiple independent AIs observing and interacting to build a model of the world? I'm thinking something roughly like the "counselors" in the Civilization games, giving defense/economic/cultural advice, but generalized over any goal-oriented scenario (and including one to take the "user" role). A group of AIs with specific roles interacting with each other seems like a good area to explore, especially now given the downward scalability of LLMs.
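A toy sketch of that council idea, with each role as a separately prompted model and a final synthesis step (the roles, prompts, and model are invented for illustration):

```python
# Toy "council" of role-prompted models; roles, prompts, and model are placeholders.
from openai import OpenAI

client = OpenAI()

ROLES = {
    "defense":  "You are a defense advisor. Point out risks and threats.",
    "economic": "You are an economic advisor. Focus on costs and resources.",
    "cultural": "You are a cultural advisor. Focus on people and morale.",
}

def ask(system_prompt: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

def council(question: str) -> str:
    # Each advisor answers independently; a final call plays the "user" role
    # and synthesizes the advice into a single recommendation.
    advice = {name: ask(prompt, question) for name, prompt in ROLES.items()}
    briefing = "Weigh this advice and recommend a course of action:\n" + "\n".join(
        f"[{name}] {text}" for name, text in advice.items()
    )
    return ask("You make the final call for the player.", briefing)
```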
This is exactly where DeepSeek's enhancements come into play. Essentially DeepSeek lets the model think out loud via chain of thought (o1 and Claude also do this), but DeepSeek does not supervise the chain of thought, and simply rewards CoTs that get the answer right. This is just one of the half dozen training optimizations that DeepSeek has come up with.
Don't forget that "CUDA" involves more than language constructs and programming paradigms.
With NVDA, you get tools to deploy at scale, maximize utilization, debug errors and perf issues, share HW between workflows, etc. These things are not cheap to develop.
Running a 680-billion parameter frontier model on a few Macs (at 13 tok/s!) is nuts. That's two years after ChatGPT was released. That rate of progress just blows my mind.
And those are M2 Ultras. M4 Ultra is about to drop in the next few weeks/months, and I'm guessing it might have higher RAM configs, so you can probably run the same 680b on two of those beasts.
The higher-performing chips, with one less interconnect, are going to give you significantly higher t/s.
Offtopic, but your comment finally pushed me over the edge to semantic satiation [1] regarding the word "moat". It is incredible how this word turned up a short while ago and now it seems to be a key ingredient of every second comment.
If you haven't already: start storing question and answer pairs and reuse the answer if the same question is asked multiple times.
You could also compute embeddings for the questions (they don't have to be OpenAI embeddings), and reuse the answer if the question is sufficiently similar to a previously asked question.
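Roughly like this, for the embedding variant (the embedding model, similarity threshold, and answer_with_llm fallback are placeholders; any embedding model would do):

```python
# Sketch of an embedding-based answer cache; model name and threshold are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()
cache: list[tuple[np.ndarray, str]] = []  # (normalized question embedding, cached answer)
SIMILARITY_THRESHOLD = 0.95  # tune this: too low and you serve wrong answers

def embed(text: str) -> np.ndarray:
    # Any embedding model works; it just needs to map similar questions close together.
    emb = client.embeddings.create(model="text-embedding-3-small", input=text)
    vec = np.array(emb.data[0].embedding)
    return vec / np.linalg.norm(vec)

def answer(question: str, answer_with_llm) -> str:
    q = embed(question)
    for cached_q, cached_answer in cache:
        if float(np.dot(q, cached_q)) >= SIMILARITY_THRESHOLD:  # cosine similarity
            return cached_answer  # near-duplicate question: reuse the stored answer
    fresh = answer_with_llm(question)  # cache miss: pay for a real completion
    cache.append((q, fresh))
    return fresh
```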
I'm not sure it's practical, or that it will result in any savings.
Wouldn't it be almost impossible to hit a duplicate when the users each form their own question?
Another issue I see is that these chat AIs usually have "history", so the question might be the same, but the context is different: the app might have received "when was he born", but in one context, the user talks about Obama and in another, she talks about Tom Brady.
If there are ways around these issues, I'd love to hear it, but it sounds like this will just increase costs via cache hardware costs and any dedup logic instead of saving money.
Also, do you really understand what the numbers in that spreadsheet mean if you have not been participating in pulling them together?