
I'm interested in the original idea. Can you expand on how your ideal system would function?



Well, the hope was that Watson, having explored and built a connected knowledge graph from various sources, could ask probing, adaptive questions to find out where a person landed.

So, say I'm an undergrad at a good university and I tell the system "I'm interested in Computer Science. I am particularly interested in Scientific Computing and would like to get to a graduate level of knowledge."

The system might ask "Sort the following operations by their worst-case run-time...". Then, if they do well there, maybe "Which of these two examples of auto-parallelization using Matlab's parfor would fail to parallelize the code?" or something like that. Over the course of many such questions, the system would paint an increasingly reliable picture of where the contours of a person's knowledge lie.
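One standard way to do this kind of adaptive probing is item response theory. Here's a minimal sketch using the one-parameter (Rasch) model: pick the unanswered question whose difficulty is closest to the current ability estimate, then nudge the estimate after each answer. The question names and difficulty values are invented for illustration; this isn't how Watson worked, just one concrete way the loop could run.

```python
import math

def p_correct(ability, difficulty):
    # Rasch (1PL) model: probability of answering correctly
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def pick_question(ability, items, asked):
    # Under 1PL, the most informative item is the one whose
    # difficulty is closest to the current ability estimate.
    candidates = [i for i in items if i not in asked]
    return min(candidates, key=lambda i: abs(items[i] - ability))

def update_ability(ability, difficulty, correct, step=0.5):
    # Crude gradient step on the log-likelihood: move toward
    # harder questions after a correct answer, easier after a miss.
    return ability + step * ((1.0 if correct else 0.0) - p_correct(ability, difficulty))

# items: question id -> calibrated difficulty (made-up values)
items = {"sort-runtimes": -1.0, "parfor-pitfalls": 0.5, "mpi-deadlock": 2.0}
ability, asked = 0.0, set()
for answer in (True, True, False):   # simulated responses
    q = pick_question(ability, items, asked)
    asked.add(q)
    ability = update_ability(ability, items[q], answer)
```

Real adaptive tests use maximum-likelihood or Bayesian updates rather than this fixed step, but the shape of the loop is the same.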

This is time-consuming, of course, but over time it would get easier and faster to find contours by using the 'average' of people with similar backgrounds as a starting point.
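The 'average of similar people' shortcut might look something like this: seed a new learner's ability estimate from previously mapped people with the same background, instead of starting cold. The profile records here are hypothetical.

```python
def cohort_prior(profiles, background):
    # Start a new learner at the average ability of previously
    # mapped people with the same background, instead of zero.
    scores = [p["ability"] for p in profiles if p["background"] == background]
    return sum(scores) / len(scores) if scores else 0.0

# hypothetical previously-mapped learners
profiles = [
    {"background": "cs-undergrad", "ability": 0.8},
    {"background": "cs-undergrad", "ability": 1.2},
    {"background": "middle-school", "ability": -1.5},
]
start = cohort_prior(profiles, "cs-undergrad")
```

With a warm start like this, the adaptive test needs fewer questions to find where a given person deviates from their cohort.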

Once the person is mapped fairly well onto Watson's cognitive model, Watson would need to trace back to the source(s) of nearby concepts and offer them to the user. Ideally, the user then rates each suggestion for relevance and perceived difficulty, which further refines the person's model and ranks the material offered for that particular profile.
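The trace-back step could be sketched as a short walk on a toy concept graph: expand outward from what the learner already knows, gather the sources attached to nearby concepts, and rank them by the relevance ratings earlier users provided. All of the concept names, sources, and ratings below are invented for illustration.

```python
# Toy concept graph: concept -> neighboring concepts,
# and concept -> source materials (names are invented).
graph = {
    "linear-algebra": ["numerical-methods", "optimization"],
    "numerical-methods": ["parallel-computing"],
}
sources = {
    "numerical-methods": ["Trefethen lectures", "MIT 18.330 notes"],
    "optimization": ["Boyd & Vandenberghe"],
    "parallel-computing": ["parfor tutorial"],
}
ratings = {"Trefethen lectures": 4.5, "MIT 18.330 notes": 3.0,
           "Boyd & Vandenberghe": 5.0, "parfor tutorial": 4.0}

def suggest(known, depth=1):
    # Walk outward from what the learner already knows, collect
    # the sources attached to nearby concepts, then rank them by
    # the relevance ratings earlier users gave for this profile.
    frontier, seen = set(known), set(known)
    for _ in range(depth):
        frontier = {n for c in frontier for n in graph.get(c, []) if n not in seen}
        seen |= frontier
    found = [s for c in seen - set(known) for s in sources.get(c, [])]
    return sorted(found, key=lambda s: -ratings.get(s, 0.0))

picks = suggest(["linear-algebra"])
```

The depth parameter controls how far "nearby" reaches; per-profile ratings would replace the single global ratings table in anything real.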

Now imagine a Grad Student asking a similar question. Or a middle school student. What would those interactions look like? The mappings? The suggestions?

Don't get me wrong...mapping a person's knowledge space is a Very Hard Problem. Watson takes a kitchen sink approach that just isn't possible for a human being. And maybe it wouldn't be possible to tease apart the resulting cognitive model into tidy nodes enough to map anything to. These were questions I'd hoped that IBM could help answer. Instead, it was on to the easy, well-understood problem and solution.


This already exists; it's called adaptive testing. I guess the new part would be modelling somebody's ability not in one topic but in many related topics.


Sure. Existing adaptive tests were an inspiration for the idea, of course. But putting together a good adaptive test is, itself, very time-consuming. And the test itself doesn't do more than measure ability. The crux of the idea is to use the adaptive test to suggest learning material from a wide swathe of sources to the individual.


This is what Knewton is all about. knewton.com


Knewton is very cool, thanks for the link.

I'm assuming they break things down into traditional objective units within a standard curriculum, with relevant material attached to each unit.

That's great but it is a very manual process.

Remember Yahoo's curated keyword recommendations from the '90s? This Knewton approach is more like that. What I'd like to see is more like Google Search in this analogy: something more flexible and comprehensive.


The basic idea is that you might start with a manual linkage between concepts, derived from linkages between content developed by subject matter experts, but over time it organically morphs into a knowledge graph that describes how certain concepts relate to and build on each other, and for which kinds of learners.

That knowledge graph combined with a concrete goal (mastery level) and time to get there (deadline) can then be used to recommend to a specific learner what material to study or activities to do next.
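As a sketch of that recommendation step, assuming a prerequisite graph with per-concept time estimates (all values invented): walk the prerequisites in dependency order, skip what the learner has already mastered, and cut the plan off at the time remaining before the deadline.

```python
# Hypothetical prerequisite graph and per-concept study hours.
prereqs = {"ode-solvers": ["calculus", "linear-algebra"],
           "calculus": [], "linear-algebra": [],
           "hpc": ["ode-solvers"]}
hours = {"calculus": 20, "linear-algebra": 15, "ode-solvers": 25, "hpc": 30}

def plan(goal, mastered, budget_hours):
    # Depth-first walk of prerequisites in dependency order,
    # skipping mastered concepts and stopping at the time budget.
    order, seen = [], set(mastered)
    def visit(c):
        if c in seen:
            return
        seen.add(c)
        for p in prereqs.get(c, []):
            visit(p)
        order.append(c)
    visit(goal)
    schedule, spent = [], 0
    for c in order:
        if spent + hours[c] > budget_hours:
            break
        schedule.append(c)
        spent += hours[c]
    return schedule

path = plan("hpc", mastered={"calculus"}, budget_hours=45)
```

Here the 45-hour budget covers linear algebra and ODE solvers but not the HPC goal itself, which is exactly the signal you'd surface to the learner: the deadline is too tight for the stated mastery level.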

It's theoretically possible to do this even more organically, but you need clearly tagged educational "stuff" (written content, lectures, activities, etc.) and, perhaps more importantly, clear ways to measure outcomes as a result of interacting with that "stuff" (typically quizzes which themselves have been clearly calibrated).
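The calibration part can be bootstrapped from quiz logs. A minimal sketch, assuming you just have pass/fail records per item: estimate each item's difficulty as the logit of its failure rate, so items most people miss score high. Real calibration would jointly fit item and learner parameters, which this deliberately skips.

```python
import math

def calibrate(responses):
    # Estimate item difficulty from observed outcomes: the logit
    # of the failure rate, so harder items get higher scores.
    out = {}
    for item, results in responses.items():
        p = sum(results) / len(results)
        p = min(max(p, 0.01), 0.99)      # clamp to avoid infinities
        out[item] = math.log((1 - p) / p)
    return out

# Simulated quiz logs: item -> list of correct (1) / incorrect (0)
logs = {"easy-q": [1, 1, 1, 0], "hard-q": [0, 0, 1, 0]}
difficulty = calibrate(logs)
```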



