Cyc (wikipedia.org)
357 points by mdszy on Dec 13, 2019 | 173 comments



I worked for Cycorp for a few years recently. AMA, I guess? I obviously won't give away any secrets (e.g. business partners, finer grained details of how the inference engine works), but I can talk about the company culture, some high level technical things and the interpretation of the project that different people at the company have that makes it seem more viable than you might guess from the outside.

There were some big positives. Everyone there is very smart and depending on your tastes, it can be pretty fun to be in meetings where you try to explain Davidsonian ontology to perplexed business people. I suspect a decent fraction of the technical staff are reading this comment thread. There are also some genuine technical advances (which I wish were more publicly shared) in inference engine architecture or generally stemming from treating symbolic reasoning as a practical engineering project and giving up on things like completeness in favor of being able to get an answer most of the time.

There were also some big negatives, mostly structural ones. Within Cycorp different people have very different pictures of what the ultimate goals of the project are, what true AI is, and how (and whether) Cyc is going to make strides along the path to true AI. The company has been around for a long time and these disagreements never really resolve - they just sort of hang around and affect how different segments of the company work. There's also a very flat organizational structure which makes for a very anarchic and shifting map of who is responsible or accountable for what. And there's a huge disconnect between what the higher ups understand the company and technology to be doing, the projects they actually work on, and the low-level day-to-day work done by programmers and ontologists there.

I was initially pretty skeptical of the continued feasibility of symbolic AI when I went in to interview, but Doug Lenat gave me a pitch that essentially assured me that the project had found a way around many of the concerns I had. In particular, they were doing deep reasoning from common sense principles using heuristics and not just doing the thing Prolog often devolved into where you end up basically writing a logical system to emulate a procedural algorithm to solve problems.

It turns out there's a kind of reality distortion field around the management there, despite their best intentions - partially maintained by the management's own steadfast belief in the idea that what Cyc does is what it ought to be doing, but partially maintained by a layer of people that actively isolate the management from understanding the dirty work that goes into actually making projects work or appear to. So while a certain amount of "common sense" knowledge factors into the reasoning processes, a great amount of Cyc's output at the project level really comes from hand-crafted algorithms implemented either in the inference engine or the ontology.

Also the codebase is the biggest mess I have ever seen by an order of magnitude. I spent some entire days just scrolling through different versions of entire systems that duplicate massive chunks of functionality, written 20 years apart, with no indication of which (if any) still worked or were the preferred way to do things.


Two questions.

It's difficult to evaluate CYC given Cycorp's secrecy. Domingos called it something like a colossal failure, which is hard to believe, but it's hard to argue given how little info gets out of Cycorp. So does Cycorp have any interest in changing this? Do they care about improving their reputation in the AI community?

How would you compare Cyc (both the knowledge base and the inference engine) with current efforts in the semantic web? Is OWL/RDF just too simple to encode the kind of logic you think common sense needs?


1. There are internal and external factors when it comes to Cycorp's secrecy. The external factors come from the clients they work with, who often demand confidentiality. Some of their most successful projects are extremely closely guarded industry secrets. I think people at Cycorp would love to publicly talk a lot more about their projects if they could, but the clients don't want the competition getting wind of the technology.

The internal factors are less about intentionally hiding things and more about not committing any resources to being open. A lot of folks within Cycorp would like for the project to be more open, but it wasn't prioritized within the company when I was there. The impression that I got was that veterans there sort of feel like the broader AI community turned their back on symbolic reasoning in the 80s (fair) and they're generally not very impressed by the current trends within the AI community, particularly w.r.t. advances in ML (perhaps unfairly so), so they're going to just keep doing their thing until they can't be ignored anymore. "Their thing" is basically paying the bills in the short term while slowly building up the knowledge base with as many people as they can effectively manage and building platforms to make knowledge entry and ontological engineering smoother in the future. Doug Lenat is weirdly unimpressed by open-source models, and doesn't really see the point of committing resources to getting anyone involved who isn't a potential investor. They periodically do some publicity (there was a big piece in Wired some time ago) but people trying to investigate further don't get very far, and efforts within the company to open things up or revive OpenCyc tend to fall by the wayside when there's project work to do.

2. I don't know that much about this subject, but it's a point of common discussion within the company. Historically, a lot of the semantic web stuff grew out of efforts made by either former employees of Cycorp or people within a pretty close-knit intellectual community with common interests. OWL/RDF is definitely too simple to practically encode the kind of higher order logic that Cyc makes use of. IIRC the inference lead Keith Goolsbey was working on a kind of minimal extension to OWL/RDF that would make it suitable for more powerful knowledge representation, but I don't know if that ever got published.


Fun fact: the creator of RSS, RDF and Schema.org is Ramanathan V. Guha, a former leader at Cycorp (currently at Google).


Two easy ones for you:

1) How did they manage to make money for so long to keep things afloat? I'm guessing through some self-sustainable projects like the few business relationships listed in the wiki?

2) What's the tech stack like? (Language, deployment, etc)


1) The money situation has changed over the years, and they've had times where things have boomed or busted - it's been a while since I left but I think they're still in a "boom" phase. There are a lottt more projects with different companies or organizations than the ones listed on the wiki, but they tend to be pretty secretive and I won't name names.

The categories of projects that I was familiar with were basically proof of concept work for companies or government R&D contracts. There are lots of big companies that will throw a few million at a long-shot AI project just to see if it pays off, even if they don't always have a very clear idea of what they ultimately want or a concrete plan to build a product around it. Sometimes these would pay off, sometimes they wouldn't but we'd get by on the initial investment for proof of concept work. Similarly, organizations like DARPA will fund multiple speculative projects around a similar goal (e.g. education - that's where "Mathcraft" came from IIRC) to evaluate the most promising direction.

There have been a few big hits in the company's history, most of which I can't talk about. The hits have basically been in very circumscribed knowledge domains where there's a lot of data, a lot of opportunity for simple common sense inferences (e.g. if Alice worked for the ABC team of company A at the same time Bob worked for the XYZ team of company B and companies A and B were collaborating on a project involving the ABC and XYZ teams at that same time, then Alice and Bob have probably met) and you have reason to follow all those connections looking for patterns, but it's just too much data for a human to make a map of. Cyc can answer questions about probable business or knowledge relationships between individuals in large sets of people in a few seconds, which would be weeks of human research and certain institutions pay a high premium for that kind of thing.
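
To make that rule concrete, here's a toy sketch in Python of the kind of join being described (purely illustrative - in the real system this is a logical rule the inference engine applies over the knowledge base, not hand-written code, and all the names and dates here are made up):

  # Toy illustration of the "Alice and Bob have probably met" rule above.
  # employments: (person, team, company, start_year, end_year)
  # collaborations: (company_a, team_a, company_b, team_b, start_year, end_year)
  employments = [
      ("Alice", "ABC", "CompanyA", 2010, 2014),
      ("Bob",   "XYZ", "CompanyB", 2011, 2013),
  ]
  collaborations = [("CompanyA", "ABC", "CompanyB", "XYZ", 2012, 2013)]

  def overlaps(s1, e1, s2, e2):
      return max(s1, s2) <= min(e1, e2)

  def probably_met():
      """Yield pairs of people who likely met via a cross-company collaboration."""
      for pa, ta, ca, sa, ea in employments:
          for pb, tb, cb, sb, eb in employments:
              if pa >= pb:                      # skip self-pairs and duplicates
                  continue
              for c1, t1, c2, t2, sc, ec in collaborations:
                  if ({(ca, ta), (cb, tb)} == {(c1, t1), (c2, t2)}
                          and overlaps(sa, ea, sc, ec)
                          and overlaps(sb, eb, sc, ec)):
                      yield (pa, pb)

  print(list(probably_met()))                   # [('Alice', 'Bob')]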

2) Oh god. Get ready. Here's a 10k foot overview of a crazy thing. All this is apparent if you use OpenCyc so I feel pretty safe talking about it. Cyc is divided into the inference engine and the knowledge base. Both are expressed in different custom LISPy dialects. The knowledge base language is like a layer on top of the inference engine language.

The inference engine language has LISPy syntax but is crucially very un-LISPy in certain ways (way more procedural, no lambdas, reading it makes me want to die). To build the inference engine, you run a process that translates the inference code into Java and compiles that. Read that closely - it doesn't compile to JVM bytecode, it transpiles to Java source files, which are then compiled. This process was created before languages other than Java targeting the JVM were really a thing. There was a push to transition to Clojure or something for the next version of Cyc, but I don't know how far it got off the ground because of 30 years of technical debt.
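
For a sense of what "transpiles to Java source" means, here's a deliberately tiny, hypothetical sketch in Python (nothing to do with Cycorp's actual translator): an s-expression gets turned into a Java expression string, which you'd then wrap in a class and hand to javac.

  # Hypothetical toy translator from a LISPy expression to Java source text.
  def to_java(expr):
      if isinstance(expr, (int, float)):
          return str(expr)
      op, *args = expr                              # e.g. ("+", 1, ("*", 2, 3))
      ops = {"+": " + ", "-": " - ", "*": " * "}
      return "(" + ops[op].join(to_java(a) for a in args) + ")"

  body = to_java(("+", 1, ("*", 2, 3)))
  print(f"public static long run() {{ return {body}; }}")
  # -> public static long run() { return (1 + (2 * 3)); }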

The knowledge base itself is basically a set of images running on servers that periodically serialize their state in a way that can be restarted - individual ontologists can boot up their own images, make changes and transmit those to the central images. This model predates things like version control and things can get hairy when different images get too out of sync. Again, there was an effort to build a kind of git-equivalent to ease those pains, which I think was mostly finished but not widely adopted.
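
In case that's hard to picture, here's a toy model of the "image" workflow (again, nothing like the actual implementation, just the shape of it): each image holds the assertions in memory, checkpoints them to disk so it can be restarted, and ships its local changes to a central image.

  import json

  class KBImage:
      """Toy model of a KB image: in-memory assertions plus periodic snapshots."""
      def __init__(self, assertions=None):
          self.assertions = set(assertions or [])
          self.pending = []                      # local changes not yet transmitted

      def assert_(self, fact):
          self.assertions.add(fact)
          self.pending.append(("assert", fact))

      def unassert(self, fact):
          self.assertions.discard(fact)
          self.pending.append(("unassert", fact))

      def snapshot(self, path):
          with open(path, "w") as f:             # serialize state so it can be restarted
              json.dump(sorted(self.assertions), f)

      def transmit_to(self, central):
          for op, fact in self.pending:          # replay local changes on the central image
              (central.assert_ if op == "assert" else central.unassert)(fact)
          self.pending.clear()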

There are project-specific knowledge base branches that get deployed in their own images to customers, and specific knowledge base subsets used for different things.


"certain institutions pay a high premium for that kind of thing"

Applications in litigation support/e-discovery?


Current Cycorp employee here.

1) One-off contracts, sometimes with ongoing licenses, from various large organizations who have use cases for inference. We did a lot of government contracting for a while; now we mostly stay in the private sector.

2) An in-house dialect of Common Lisp that compiles to Java (it predates Clojure). Deployment is still fairly ad-hoc, but we build Containers.


> Common Lisp that compiles to Java

To Java bytecode or to Java code?


Java _code_. It's madness, though I'm assured it made sense at the time.


It makes more sense than generating bytecode, since your "compiler" doesn't need to keep up with runtime changes beyond high-level API compatibility, and you can leverage the performance improvements of whichever Java compiler is used to produce the final bytecode.


Java code is almost the same as bytecode, and is easier to debug.

That's a benefit of having a VM with JIT.


Java code, and then bytecode of course. Gives us meaningful stack traces, among other things.


Are there any real world applications that are built on Cyc's knowledge base and actually in regular use? Is Cyc critical to those applications or could they have been more easily built some other way?


Yes, I probably can't talk about them though. There are companies that use Cyc as part of processes for avoiding certain kinds of risks and the financial impact (by the company's estimation, not Cycorp's) is an unfathomably large amount of money. The thing I'm thinking of seems like something Cyc (or something Cyc-like) is relatively uniquely suited for. But for large scale systems, which approach is easier in the long term is really hard to estimate with any confidence.

Really when it comes to practical applications using Cyc, there are three alternatives to consider and only two of them actually exist.

1. There are custom domain specific solutions, involving tailored (limited) inference engines and various kinds of smart databases.

2. There's Cyc.

3. There's a hypothetical future Cyc-like inference system that isn't burdened by 30 years of technical debt.

I personally suspect that some of Cycorp's clients would do better with domain-specific solutions because they don't realize how much of their problem could be solved that way and how much of the analysis coming from Cyc is actually the result of subject matter experts effectively building domain-specific solutions the hard way inside of Cyc. With a lot of Cycorp projects, it's hard to point your finger at exactly where the "AI" is happening.

There are some domains where you just need more inferential power and to leverage the years and years of background knowledge that's already in Cyc. Even then I sometimes used to wonder about the cost/effort effectiveness of using something as powerful and complicated as Cyc when a domain-specific solution might do 90% as well with half the effort.

If someone made a streamlined inference engine using modern engineering practices with a few years of concentrated work on making it usable by people who don't have graduate degrees in formal logic, and ported the most useful subset of the Cyc knowledge base over, that math would change dramatically.


> With a lot of Cycorp projects, it's hard to point your finger at exactly where the "AI" is happening.

Lol. It really sounds like none of the projects need Cyc. Sounds like the model is to bait smart engineers to work at an engineery company and then sell engineering consulting to companies who would never be able to land their own smart engineers.


IBM/Watson model.


1) Do you think it's possible that Cyc would lead to AGI?

2) Do you think it's possible that Cyc would lead to AI advances that are impressive to the layman like AlphaGo or GPT-2?


These answers are very personal to me. I joined Cycorp because Doug Lenat sold me on it being a more viable path toward something like AGI than I had suspected when I read about it. I left for a number of reasons (e.g. just to pursue other projects) but a big one was slowly coming to doubt that.

I could be sold on the idea that Cyc or something Cyc-like could be a piece of the puzzle for AGI.

I say "Cyc-like" because my personal opinion is that the actual Cyc system is struggling under 30-odd years of rapidly accruing technical debt and while it can do some impressive things, it doesn't represent the full potential of something that could be built using the lessons learned along the way.

But the longer I worked there the more I felt like the plan was basically:

1. Manually add more and more common-sense knowledge and extend the inference engine

2. ???

3. AGI!

When it comes to AI, the questions for me are basically always: what does the process by which it learns look like? Is it as powerful as human learning, and in what senses? How does it scale?

The target is something that can bootstrap: it can seek out new knowledge, creatively form its own theories and test them, and grow its own understanding of the world without its knowledge growth being entirely gated by human supervision and guidance.

The current popular approach to AI is statistical machine learning, which has improved by leaps and bounds in recent years. But when you look at it, it's still basically just more and more effective forms of supervised learning on very strictly defined tasks with pretty concrete metrics for success. Sure, we got computers to the point where they can play out billions of games of Chess or Go in a short period of time, and gradient descent algorithms to the point where they can converge to mastery of the tasks they're assigned much faster - in stopwatch time - than humans. But it's still gated almost entirely by human supervision - we have to define a pretty concrete task and set up a system to train the neural nets via billions of brute force examples.

The out-of-fashion symbolic approach behind Cyc takes a different strategy. It learns in two ways: ontologists manually enter knowledge in the form of symbolic assertions (or set up domain-specific processes to scrape things in), and then it expands on that knowledge by inferring whatever else it can given what it already knows. It's gated by the human hand in the manual knowledge acquisition step, and in the boundaries of what is strictly implied by its inference system.

In my opinion, both of those lack something necessary for AGI. It's very difficult to specify what exactly that is, but I can give some symptoms.

A real AGI is agentive in an important sense - it actively seeks out things of interest to it. And it creatively produces new conceptual schemes to test out against its experience. When a human learns to play chess, they don't reason out every possible consequence of the rules in exactly the terms they were initially described in (which is basically all Cyc can do), or sit there and memorize higher-order statistical patterns in play through billions of games of trial and error (which is basically what ML approaches do). They learn the rules, reason about them a bit while playing games to predict a few moves ahead, play enough to get a sense of some of those higher-order statistical patterns, and then they do a curious thing: they start inventing new concepts that aren't in the rules. They notice the board has a "center" that it's important to control; they start thinking in terms of "tempo" and "openness" and so on. The end result is in some ways very similar to the result of higher-order statistical pattern recognition, but in the ML case those patterns were hammered out one tiny change at a time until they matched reality, whereas in the human there's a moment where they did something very creative and had an idea and went through a kind of phase transition where they started thinking about the game in different terms.

I don't know how to get to AI that does that. ML doesn't - it's close in some ways but doesn't really do those inductive leaps. Cyc doesn't either. I don't think it can in any way that isn't roughly equivalent to manually building a system that can inside of Cyc. Interestingly, some of Doug Lenat's early work was maybe more relevant to that problem than Cyc is.

Anyway that's my two cents. As for the second question, I have no idea. I didn't come up with anything while I worked there.


But the longer I worked there the more I felt like the plan was basically:

1. Manually add more and more common-sense knowledge and extend the inference engine

2. ???

3. AGI!

That's the same impression I had in the early days of expert systems. I once made the comment, "It's not going to work, but it's worth trying to find out why it won't work." I was thinking that rule-based inference was a dead end, but maybe somebody could reuse the knowledge base with something that works better.


Thanks for the answer!

> Interestingly, some of Doug Lenat's early work was maybe more relevant to that problem than Cyc is.

Yeah, Eurisko was really impressive, I often wondered why people don't work on that kind of stuff anymore.


The last part of your comment is kind of messy, but I agree with this part and find it interesting:

> in the ML case those patterns were hammered out one tiny change at a time until they matched reality, whereas in the human there's a moment where they did something very creative and had an idea and went through a kind of phase transition where they started thinking about the game in different terms.

Phase transition, or the "aha" moment, where things start to logically make sense. Humans have that moment. Knowledge gets crystallized in the same sense as water starting to form a crystal structure. The regularity in the structure offers the ability to extrapolate, which is what current ML is known to be poor at.


Great comments by you and others here.

I was visiting MCC during the startup phase and Bobby Inman spent a little time with me. He had just hired Doug Lenat, but Lenat was not there yet. Inman was very excited to be having Lenat on board. (Inman was on my board of directors and furnished me with much of my IR&D funding for several years.)

From an outsider’s perspective, I thought that the business strategy of Open Cyc made sense, because many of us outside the company had the opportunity to experiment with it. I still have backups of the last released version, plus the RDF/OWL releases.

Personally, I think we are far from achieving AGI. We need some theoretical breakthroughs (I would bet on hybrid symbolic, deep learning, and probabilistic graph models). We have far to go, but as the Buddha said, enjoy the journey.


Thank you for this AMA, it was eye opening and made me think a lot about the organizational/tech debt barriers to creating AGI (or creating an organization that can create AGI).

I'm a ML researcher working on Deep Learning for robotics. I'm skeptical of the symbolic approach by which 1) ontologists manually enter symbolic assertions and 2) the system deduces further things from its existing ontology. My skepticism comes from a position of Slavic pessimism: we don't actually know how to formally define any object, much less ontological relationships between objects. If we let a machine use our garbage ontologies as axioms with which to prove further ontological relationships, the resulting ontology may be completely disjoint from the reality we live in. There must be a forcing function with which reality tells the system that its ontology is incorrect, and a mechanism for unwinding wrong ontologies.

I'm reminded of a quote from the movie Alien: Covenant.

Walter : When one note is off, it eventually destroys the whole symphony, David.


I am currently trying to build an AGI on my free time.

> it doesn't represent the full potential of something that could be built using the lessons learned along the way.

"The lessons learned" - what are those lessons? I would like to benefit from them instead of reproducing your past mistakes.


Cyc uses a dead-end 1980s concept of AGI (expert system ruleset) that led to the AI winter.


Neural networks also led to the AI winter. It wasn't until we had enough compute power that neural networks become popular again.

Some people think that reasoning is the same. If we had a database of enough common-sense facts, there would be a tipping point where it becomes useful.


What can Cyc do that other tech can't do? And, more importantly, is that stuff useful?


If there are current employees reading, they might be able to give a better answer than me. Basically, the project is to build a huge knowledge base of basic facts and "common sense" knowledge and an inference engine that could use a lot of different heuristics (including ones derived from semantic implications of contents of the knowledge base) to do efficient inference on queries related to its knowledge. One way of looking at Cyc from a business point of view is that it's a kind of artificial analyst sitting between you and a database. The database has a bunch of numbers and strings and stuff in a schema to represent facts. You can query the database. But you can ask an analyst much broader questions that require outside knowledge and deeper semantic understanding of the implications of the kinds of facts in the database, and then they go figure out what queries to make in order to answer your question - Cyc sort of does that job.
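
A crude way to picture the "analyst in front of a database" idea (just a sketch of the concept, nothing like the real query pipeline, with made-up data): the raw database can't answer a broad question by itself; the outside knowledge is what bridges the gap.

  # The database only stores raw facts; answering a broader question requires
  # common-sense knowledge that isn't in the schema.
  db = {"pets": [("Alice", "dog"), ("Bob", "goldfish")]}

  # Background knowledge an analyst (or a Cyc-like system) brings to the table:
  needs_license = {"dog": True, "cat": False, "goldfish": False}

  def who_needs_a_pet_license():
      """A question the raw DB can't answer without outside knowledge."""
      return [owner for owner, species in db["pets"] if needs_license.get(species, False)]

  print(who_needs_a_pet_license())   # ['Alice']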

The degree to which it's effective seemed to me to be a case-by-case thing. While working there I tended to suspect that Cyc people underestimated the degree to which you could get a large fraction of their results using something like Datomic and it was an open question (to me at least) whether the extra 10% or whatever was worth how much massively more complicated it is to work with Cyc. I might be wrong though, I kind of isolated myself from working directly with customers.

One issue is just that "useful" always invites the question "useful to whom?"

Part of the tension of the company was a distinction between their long term project and the work that they did to pay the bills. The long term goal was something like, to eventually accumulate enough knowledge to create something that could be the basis for a human-ish AI. Whether that's useful, or their approach to it was useful, is a matter for another comment. But let's just say, businesses rarely show up wanting to pay you for doing that directly, so part of the business model is just finding particular problems that they were good at (lots of data, lots of basic inference required using common sense knowledge) that other companies weren't prepared to do. Some clients found Cyc enormously useful in that regard, others were frustrated by the complexity of the system.


Thanks for the reply. A 10% improvement in anything is usually immensely valuable but I know that you're using an arbitrary number in that 10%. I think the trick would be to make Cyc less complicated. It sounds like Cyc would do best sitting inside of a university or a foundation where they wouldn't have to worry about corporate clients. Or inside a massive tech company like Google where its budget would just be a drop in the bucket.


All the programmers at Cycorp, and most of the ones who've gone on to do other things, have a dream of re-writing the inference engine from scratch. It's just that those dreams don't necessarily align in the details, and the codebase is so, so, so big at this point that it's a herculean undertaking.


How do you deal with the fact that human knowledge is probabilistic? I.e. that it's actually mostly "belief" rather than a "fact", and the "correct" answer heavily depends on the context in a somewhat Bayesian way. Best I can tell we don't yet have math to model this in any kind of a comprehensive way.


Cyc has a few ways of dealing with fallibility.

Cyc doesn't do anything Bayesian like assigning specific probabilities to individual beliefs - IIRC they tried something like that, and it had the problem that nobody felt very confident attaching any particular precise number to priors; also, the inference chains can be so long and involve so many assertions that anything less than probability 1 on most assertions would result in conclusions with very low confidence levels.

As to what they actually do, there are a few approaches.

I know that for one thing, there are coarse grained epistemic levels of belief built into the representation system - some predicates have "HighLikelihoodOf___" or "LowLikelihoodOf___" versions that enable very rough probabilistic reasoning that (it's argued - I have no position on this) is actually closer to the kind of folk-probabilistic thinking that humans actually do.

Also Cyc can use non-monotonic logic, which I think is relatively unique for commercial inference engines. I'm not going to give the best explanation here, but effectively, Cyc can assume that some assertions are "generally" true but may have certain exceptions, which makes it easy to express a lot of facts in a way that's similar to human reasoning. In general, mammals don't lay eggs. So you can assert that mammals don't lay eggs. But you can also assert that statement is non-monotonic and has exceptions (e.g. Platypuses).
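
A toy version of that default-with-exceptions behavior, in Python rather than anything resembling Cyc's actual representation:

  # "Mammals generally don't lay eggs, unless the animal is a platypus."
  defaults   = {("Mammal", "lays_eggs"): False}
  exceptions = {("Platypus", "lays_eggs"): True}
  isa   = {"Rex": "Mammal", "Perry": "Platypus"}
  genls = {"Platypus": "Mammal"}                 # platypuses are still mammals

  def holds(individual, prop):
      kind = isa[individual]
      while kind is not None:                    # most-specific information wins
          if (kind, prop) in exceptions:
              return exceptions[(kind, prop)]
          if (kind, prop) in defaults:
              return defaults[(kind, prop)]
          kind = genls.get(kind)
      return None

  print(holds("Rex", "lays_eggs"))    # False (the mammal default)
  print(holds("Perry", "lays_eggs"))  # True  (the platypus exception overrides it)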

Finally, and this isn't actually strictly about probabilistic reasoning, but helps represent different kinds of non-absolute reasoning: knowledge in Cyc is always contextualized. The knowledge base is divided up into "microtheories" - contexts where assertions are given to hold as if they're both true and relevant - and very little is assumed to be always true across the board. This allows them to represent a lot of different topics, conflicting theories or even fictional worlds - there are various microtheories used for reasoning about events in popular media franchises, where the same laws of physics might not apply.
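
And a similarly crude sketch of the microtheory idea - the same query can come out differently depending on the context you ask it in (the microtheory names are made up for illustration):

  # Assertions live in contexts ("microtheories") that can inherit from more
  # general ones; a fictional context can override the physical default.
  microtheories = {
      "GeneralMt":          {"humans_can_fly": False},
      "PhysicsMt":          {},                        # inherits GeneralMt
      "SuperheroFictionMt": {"humans_can_fly": True},
  }
  inherits = {"PhysicsMt": "GeneralMt", "SuperheroFictionMt": "GeneralMt"}

  def ask(mt, query):
      while mt is not None:
          if query in microtheories[mt]:
              return microtheories[mt][query]
          mt = inherits.get(mt)
      return None

  print(ask("PhysicsMt", "humans_can_fly"))           # False
  print(ask("SuperheroFictionMt", "humans_can_fly"))  # True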


Thank you for the answer, I thought it was simpler than that, I'm glad the assumption was wrong.

I understand that any practical system of this kind would have to be very coarse, but even at the coarse level, does it have any kind of "error bar" indicator, to show how "sure" it is of the possibly incorrect answer? And can it come up with pertinent questions to narrow things down to a more "correct" answer?


I'm not sure I'm able to answer that in a satisfying way just because my memory is fallible. The degree to which the system is unsure of something (to the degree to which that can be coarsely represented) certainly shows up in the results, and I suspect the underlying search heuristics tend to prioritize things with a represented higher confidence level.

The latter thing sounds like something Doug Lenat has wanted for years, though I think it mostly comes up in cases where the information available is ambiguous, rather than unreliable. There are various knowledge entry schemes that involve Cyc dynamically generating more questions to ask the user to disambiguate or find relevant information.


What is your opinion about the popular trend of making everything probabilistic, especially in ML, instead of using default logic? For example, does it make sense to say "mammals don't lay eggs" is true with 98% probability because of the platypus exception?


Your analysis of Cyc is insightful and resonates with my own experiences.

Two general questions, if you don't mind:

1. How would you characterize the relationship between the politics and structure in your company?

2. Do you feel that the layer of people actively isolating the top embodied the company's culture?


I don't work there anymore, though I know some folks that do. I suspect that they're reading this and don't want me to air out their dirty laundry too much.

Here's what I'll say: the degree of isolation between different mindsets and disagreement (which was typically very amicable, if it was acknowledged at all) is emblematic of the culture of the company. There are people there with raaadically different ideas of what Cyc is for, what it's good at, and even about empirical things like how it actually works. They mostly get along; sometimes there's tension. Over the years, the Cyc that's actually implemented has drifted pretty far from the Cyc that people like Doug Lenat believe in, and the degree to which they're willing or able to acknowledge that seems to sort of drift around, often depending on factors like mood. Doug would show up and be very confused about why some things were hard, because he just believes that Cyc works differently than it does in practice, and people had project deadlines, so they often implemented features via hacks to shape inference or hand-built algorithms to deliver answers that Doug thought ought to be derived from principles via inference. Doug thinks way more of the stuff Cyc does is something it effectively learned to do by automatically deriving a way to solve the general form of a problem, rather than a programmer up late hand-coding things to make a demo work the next day - and the programmers aren't going to tell him, because there's a demo tomorrow too and it's not working yet.


> the degree of isolation between different mindsets and disagreement (that was typically very amicable if it was acknowledged at all) is emblematic of the culture of the company

A thoughtful perspective that is useful for my own understanding.

Thank you.


Do you hire philosophers?


Yes, quite a few. The prerequisite is basically that you have to be able to do formal logic at the first order level and also Doug has to be in a good mood when you do the interview.


1) Is first order logic not expressive enough for some use cases? Do you need higher order logic?

2) Where can I find a complete explanation of why Cyc hasn't yet been enough to build true natural language understanding, and which technical difficulties need to be solved? Examples would be welcome.

3) It would be really nice if you showed progress in real time and allowed the community to contribute intellectually. You could make a GitHub repository without the code, but where we could see all the technical issues by tag, so we could follow the discussions and eventually share useful knowledge with you in order to accelerate progress.


I'm also a Cycorp employee, so I can say a little bit at least about (1) and (2).

1) We often use HOL. CycL isn't restricted to first order logic and we often reason by quantifying over predicates.

2) I don't know where you could read an explanation of it, other than the general problem that NLU is hard. It is something people at the company are interested in, though, and some of us think Cyc can play a big role in NLU.
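
For readers wondering what quantifying over predicates buys you: a single higher-order rule can state something like transitivity once, for every predicate declared transitive, instead of repeating it for each predicate. Schematically (my own rendering, not verbatim CycL):

  \forall P \,\forall x, y, z \; \big[\mathrm{TransitiveBinaryPredicate}(P) \land P(x, y) \land P(y, z)\big] \rightarrow P(x, z)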


Thanks. BTW, do you use some formal linguistic theories, such as the ones from Noam Chomsky?


I wouldn't say that we feel beholden to particular theories, but some of us have backgrounds in linguistics and we draw on those.


Knowledge bases should work in principle. There are many issues with filling them manually: a) the schema/ontology/conceptual framework is not guaranteed to be useful, especially when built with no specific application in mind; b) the high cost of adding each fact with little marginal benefit; etc. But I don't think that outweighs the issues of "pure" machine learning that much: poor introspection, unpredictability of what you will get, and if you want really structured and semi-reliable information you will probably have to rely, at some point, on something like Wikipedia meta-information (DBpedia) - which is really a knowledge base with its own issues.

I think what really stopped Cyc from getting wider traction is its closed nature[0]. People do use Princeton WordNet, which you can get for free, even though it's a mess in many aspects. The issue and mentality here is similar to commercial Common Lisp implementations, and the underlying culture is similar (oldschool 80s AI). These projects were shaped with a mindset that major progress in computing would happen through huge government grants and plans[1]. However you interpret the last 30 years, that's not exactly how it went. It's possible that all these companies earn money for their owners, but they have no industry-wide impact.

I was half-tempted once or twice to use something like Cyc in some project, but it would probably be too much organizational hassle. Especially if it turned out to be something commercial I wouldn't want to be dependent on someone's licensing and financial whims, especially if it can be avoided.

[0] There was OpenCyc for a time, but it was scrapped.

[1] Compare https://news.ycombinator.com/item?id=20569098


> if you want to have really structured and semi-reliable information you will probably have to rely, at some point, on something like Wikipedia meta-information (DBpedia).

Wikidata is also worth considering for that task. It is:

* Directly linked from Wikipedia [1]

* The data source for many infoboxes [2]

* Seeded with data from Wikipedia

* More active and integrated in community

* Larger in total number of concepts

Wikidata also has initiatives in lexicographic data [3] and images [4, 5].

On the subject of Cyc: the CycL "generalization" (#$genls) predicate inspired Wikidata's "subclass of" property [6], which now links together Wikidata's tree of knowledge.

---

1. See "Wikidata" link at left in all articles, e.g. https://en.wikipedia.org/wiki/Knowledge_base

2. https://en.wikipedia.org/wiki/Category:Infobox_templates_usi...

3. https://www.wikidata.org/wiki/Wikidata:Lexicographical_data/...

4. https://www.wikidata.org/wiki/Wikidata:Wikimedia_Commons/Dev...

5. See "Structured data" tab in image details on Wikimedia Commons, e.g. https://commons.wikimedia.org/wiki/File:Mona_Lisa,_by_Leonar...

6. https://www.wikidata.org/wiki/Property_talk:P279#Archived_cr...


I think you are correct about open availability being a large factor in something like Cyc not being widely used and adopted. Structured data sources like Metaweb (now merged with Wikidata), DBPedia, and Wikidata have high practical value, feeding into large knowledge graphs at Google, FB, etc.

I wonder what would have happened with Cyc if twenty years ago a funding manager at DARPA had provided incentives to have Cyc entirely open. This might have led to major code refactoring, many more contributions, etc. even understanding that adding common sense knowledge to Cyc requires special skills and education.


The following utterance sort of looks like the triple data structure used in graph/knowledge databases:

"Alive loves Bob"

What do you know? Nothing. Was it Alice who said she loves Bob, or was it Bob who said it is Alice who loves him, or maybe Carol saw the way Alice looks at Bob and concluded she must love him? What is love anyway? How exactly is the love Alice has for Bob quantitatively different from my love of chocolate? It might register similar brain activity in an MRI scan, and yet we humans recognise them as qualitatively different.

A knowledge base is useless if you can't judge whether a fact is true or false. The response to this problem was for the semantic web community to introduce a provenance ontology, but any attempt to reason over statements about statements seems to go nowhere. IMHO you can't solve the problem of AGI without also having a way for a rational agent to embody its thoughts in the physical world.
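
The nesting problem is easy to see if you try to attach provenance to that bare triple - the statement itself has to become an object you can make statements about (a toy sketch, not any particular reification scheme):

  # A bare triple carries no provenance.
  triple = ("Alice", "loves", "Bob")

  # To record who claimed it and on what basis, you make a statement about it...
  claim = {
      "statement": triple,
      "asserted_by": "Carol",
      "basis": "observed the way Alice looks at Bob",   # inference, not testimony
  }

  # ...and nothing stops claims about claims, all the way down.
  meta_claim = {"statement": claim, "asserted_by": "Dave", "basis": "Carol told me"}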


Agreed. Human thinking is arbitrarily high-order -- we use statements about statements about statements with no particular natural complexity limit. This seems to me the big limitation of knowledge graphs: the majority of real-world information, just like the majority of natural-language sentences, consists of highly nested relationships among relationships.

That was my motivation for writing Hode[1], the Higher-Order Data Editor. It lets you represent arbitrarily nested relationships, of any arity (number of members). It lets you cursor around data to view neighboring data, and it offers a query language that is, I believe, as close as possible to ordinary natural language.

(Hode has no inference engine, and I don't call it an AI project -- but it seems relevant enough to warrant a plug.)

[1] https://github.com/JeffreyBenjaminBrown/hode


Is it possible For a non-academic to get a ResearchCyc license? I’ve used OpenCyc in the past; now that I’m retired I’d like to look deeper into Cyc.


Last time I looked at OpenCyc's knowledge base, the information encoded was all strangely specific academic stuff - like very fine classifications and relationships between species of tapeworms and of fungus. There was very little daily-life common-sense knowledge, even though that's often the hook in interviews and articles about Cyc's purpose. I'm not sure why that's true - maybe it's hard to decide what the 'facts' are about normal human life, but the more academic something is, the more there's a consensus, rationalized 'reality'.


Employee of Cycorp here. A few thoughts:

- At least right now, we have a good amount of common-sense information about the world (I don't know when "last time" was for you).

- That said, we have a lot of highly specialized knowledge in various domains, so if you took a random sample of the knowledge base (KB) it may not be as common-sense-centric as you'd hope. But the KB is also incredibly large, so that doesn't mean we don't have much common-sense, just that we have even more other stuff.

- Often for contracts we get paid to construct lots of domain-specific knowledge, even if the project also uses the more general knowledge, so this biases the distribution some.

- Information that's already well-taxonomized is low-hanging fruit for this kind of system; its representation doesn't take nearly as much extra thought and consideration, so it's a faster process, which also biases the distribution some.


What's an example of something that feels just out of reach? What's the simplest thing that feels impossibly far away?


OpenCyc hasn't been a thing for something like a decade, so even if "last time" was yesterday, it'd still be on outdated information (because Cycorp keeps things proprietary, hidden and unauditable). Do you know when they last pushed it out? It's been a while.


It has indeed been a while, unfortunately. I don't know the exact date. Some of us here are trying to push for a revival of OpenCyc or something similar, to democratize things and get third-party developers playing with the system, but for now OpenCyc is not really supported.


Good luck! That's a really genuinely exciting prospect. I hope you succeed!


What are some interesting examples of common-sense that has been formalized and encoded?


One of our primary test suites is what we call "Common Sense Tests". They comprise a set of common-sense questions that require some leaps of reasoning to answer, and we use them as a metric of our common-sense knowledge. So for example:

  Would a human dislike touching a/an incandescent bulb while the electric lamp is powered on?

  Yes.

  ?HUMAN dislikes being a performer in the ?TOUCHING.
    • Embodied agents dislike performing acts that cause them discomfort.
    • ?HUMAN is an embodied perceptual agent.
      • ?HUMAN is a human.
        • Every human is an embodied perceptual agent.
    • ?HUMAN deliberately performed ?TOUCHING.
    • ?TOUCHING causes some discomfort.
      • Touching something that is too hot to touch causes pain.
      • The quantity range pain includes all points in some discomfort.
      • ?PART is too hot to touch.
        • When an incandescent bulb is on, it is too hot to touch.
        • ?PART is an incandescent bulb.
        • ?PART’s current state is powered on.
          • When a lamp with a bulb is on, so is the bulb.
          • ?PART is a physical part of ?DEVICE.
            • ?PART is a physical part of ?DEVICE.
            • ?PART is a physical part of ?PART.
          • ?DEVICE’s current state is powered on.
          • ?PART is a light bulb.
            • ?PART is an incandescent bulb.
            • Every incandescent bulb is a light bulb.
          • ?DEVICE is an electric lamp.
      • ?PART was affected during ?TOUCHING.

We have a couple thousand of these, which we've aimed to make as diverse as possible.


So to "change a lightbulb", so to speak, the system decides something like "turn off the lamp first". But then the evaluation above says that the human would not dislike touching the bulb, but, in reality, it's still too hot.

So you could incorporate some kind of cooling rate, then change the above to "When an incandescent bulb has been on x of the last y minutes it is too hot to touch".

This all seems just impossibly complicated (not that I can think of something simpler!) - am I missing anything?


It is very complicated, yes. The goal is to be AI's "left brain"; the slower, more methodical (and explainable!) end of the spectrum. We see our place as being complementary to ML's "right brain", fast-but-messy thinking.

I will say also that our focus on "common sense" means we make deliberate choices about where to "bottom out" the granularity with which we represent the world; otherwise we'd find ourselves reasoning about literal atoms in every case. We generally try to target the level at which a human might conceive of a concept; humans can treat a collection of atoms as roughly "a single object", but then still apply formal logic to that object and its properties (and "typical" properties). In one sense it isn't a perfect representation, but in another sense it strikes the right balance between perfect and meaningful.


Thanks so much for the explanation!


> the information encoded was all strangely specific academic stuff - like very fine classifications and relationships between species of tapeworms and of fungus.

Even Wikidata mostly looks like that, despite being intended quite clearly as a "general purpose" knowledge base. Mostly because this sort of information is easily extracted from existing, referenced sources. The "general purpose" character of it all comes into play wrt. linking across those specialized domains.


This happens also with similar bases in my experience. I suspect it's very much an economic phenomenon. It's easy to hire someone to input relations from a textbook if not from Wikipedia. It's hard to find someone with credentials for "understanding everyday life on philosophical level", and for logical formalisms, and also no overriding penchant for philosophical bikeshedding. Even then, it would be a time-consuming and thus also costly process.


I am familiar with prolog and know what it takes (ish) to make an old school expert system. I have heard about this project. Are there any demos of this system? Like a video sales pitch. I have always wanted to see it in action.


I can't find it, but I distinctly remember that part of an episode of 3-2-1 Contact in the 80s was about what must have been an early version of this system. It was the exact same thing brundolf mentions* about the common sense system and how they set up the system to ask questions when contradictions arose. An example they used was that it had asked if a human is still human when shaving. It is interesting that the system still exists.

Of course, I don't recall them mentioning any of the more dystopian things it could be (and sounds like has been) used for :/.

* https://news.ycombinator.com/item?id=21784105

On second thought, it might have been an Alan Kay presentation. I couldn't find that either but looking I did find this interesting Wired article from 2016:

https://www.wired.com/2016/03/doug-lenat-artificial-intellig...


Amusingly, or sadly, depending on your perspective: in practical settings the default behavior on that question depends on exactly what you want "human" to mean in the vocabulary of the local use case, because continuants are a very leaky abstraction if they are used to type biological systems. While to our 'common sense' the type of "human" and "human shaving" should obviously be the same, once you get to questions about whether seemingly insignificant numerical differences in rates of catalysis constitute differences in type, the distinction between "protein" and "protein wiggling slightly faster than usual" or "protein binding molecule A" (think "human holding shaver") suddenly becomes very much not obvious, depending on exactly what question you want to answer. In the protein example, if you black-box the system, they can be fundamentally different. And if "human" means "predator", and your question is how dangerous this human is, then "human" and "human holding razor" become "agentous thing" and "agentous thing with sharp-edged object" - practically different things in very important ways if you are trying not to be filleted.


In my experience, most people dismiss Cyc as a failed science experiment. This shouldn't be! After all, many important deep learning concepts have their roots in the 80s, and it is possible that Cyc could be revived too.


Cyc itself probably won't be: proprietary information is something that modern scientists tend to know better than to invest time in. Symbolic AI, though, hasn't really died.


CYC effectively died the day OpenCYC died. There is no way that an entity that tries to catalogue human knowledge in this way will thrive on a closed set of data, there are only so many people working there.

Just like the Encyclopaedia Britannica found its match in Wikipedia, so CYC will find its match in something open. The engine - if the comments here are to be believed as still currently relevant - is a core that may be relevant plus a huge number of domain-specific hacks. Let's hope sooner or later CYC management comes to their senses and revives OpenCYC.


That is the SUMO ontology (http://ontologyportal.org)! It is open, on GitHub, and people can contribute.


1) What do you think about a hybrid approach: hypergraphs + large-scale NLP models (transformers)?

2) How far are we from real self-evolving cognitive architectures with self-awareness features? Is it a question of years, months, or is it already a solved problem?

3) Does it make sense to use embeddings like https://github.com/facebookresearch/PyTorch-BigGraph to achieve better results?

4) Why did Cycorp decide to limit communication and collaboration with the scientific community / AI enthusiasts at some point?

5) Did you try to solve GLUE / SUPERGLUE / SQUAD challenges with your system?

6) Does Douglas Lenat still contribute actively to the project?

Thanks


Doug Lenat is very much still active in the project. He doesn't do as much work building the ontology, but he plays a role in how various projects develop and provides a lot of feedback.


How do you compare with SOAR and opencog/atomspace?

Which is the most promising AGI project, according to you?


I've always thought that being able to model the physical world at multiple levels of abstraction was pretty essential for trying to interact with it in a less brittle way.

Moreover, having models of things that are interesting and relevant to humans seems pretty important for any system that interacts with humans.

And it always seemed reasonable that any system that aims to use natural language should be able to represent the meaning of the sentences it uses in a clear and understandable format.

Also "organizing the world's information" should make it usable in an automated fashion based on semantic models.


I immediately recognized the headline even though it's been 15 years since I last read up on Cyc.

I still think the potential of lambda calculus in knowledge representation and logical deduction is high and under-represented in research.

Just theorizing, but I think a large part of the problem is the difficulty in interfacing this knowledge base with manual, human entry. Another pitfall is the difficulty in determining strange or unanticipated logical outcomes, and developing a framework to catch or validate these.


I have been working on that direction with Lean Theorem Prover (https://leanprover.github.io). There is also works using Coq (https://link.springer.com/chapter/10.1007/978-3-642-35786-2_...)


Cyc was the last holdout of GOFAI in the 90's, its premise being that the traditional symbolic AI paradigm wasn't wrong, but that it was just a matter of scale.


Almost all AGI projects use symbolic AI. It is a misconception to believe that connectionism has won; it only leads on narrow tasks that help to build the higher-level thing that is AGI.


My 2 cents is that I can ask just about any question I can think of and absorb and internalize an amazing answer in 5 minutes of reading. Many of those same questions can be automatically asked and answered too. The web and search engines are realizing the promise far better than anything else.


God I wish I felt this way. It's true that I can get an amazing amount of information in a short time, but I don't know how accurate or complete it is, and I don't even know how to find out.


At a surface level and when you know what questions to ask.


How does this compare to wolfram alpha?


I used to read about cyc here https://www.cyc.com/cycl-translations. But it says now "coming soon". Since we have folks from Cyc here, any ideas how soon?


While this approach might seem dated and strange, at some point something will begin to approach the ability to do general learning like a human. I just wonder how long we have to wait.


Eternity? If all we do is wait...

Almost nobody is really working on AGI, and this is the main issue. A notable counterexample is John Carmack's recent career change.


What if general learning is uncomputable?


General learning is uncomputable, it's called Solomonoff induction. You don't need general learning, you need something at least as powerful as the mess in a human brain.
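
For reference, the usual formalization is Solomonoff's universal prior: weight every program p that makes a universal prefix machine U output something beginning with the observed data x, by the program's length,

  M(x) = \sum_{p \,:\, U(p) \text{ outputs a string beginning with } x} 2^{-|p|}

and predicting with M is uncomputable precisely because deciding which programs contribute to the sum runs into the halting problem.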


Can you provide some references on "General learning is uncomputable"? Thanks.



What if the mess in the human brain is powered by an uncomputable transcendent mind?


I first read this as "uncomputable translucent mind" and I loved the imagery.


For me, it would be helpful to see some more examples of how to formulate the query and how the reasoning would look. Could someone share some examples of "common knowledge" that they think are cool?

Here are some common-knowledge questions in English that I would love to see the system answer.

- Is a dog owner likely to own a rope-like object? (Yes, they likely own a leash.)

- Does the average North American own more than 1 shoestring? (Yes, most people have at least 2 shoes, and most shoes have shoestrings.)

- Is it safer to fly or to travel by car?


Is it possible that the inherent context-dependent ambiguities of human language are what make knowledge-based inference so difficult, since most current knowledge is stored in human language?

Tangential question: is there a standard language for "knowledge", the way math is for "computation"?

Is a part of our brains essentially a compiler from human language to an internal representation of "knowledge" that leads to consciousness?


Attempto Controlled English[1] is a cool project: it's a formal language, but a subset of ordinary English.

My own Hode, described in an earlier comment[2], makes it easy for anyone who speaks some natural language to enter and query arbitrary structured data.

[1] https://en.wikipedia.org/wiki/Attempto_Controlled_English

[2] https://news.ycombinator.com/item?id=21784617


There are a bunch of "standards" for representing knowledge. E.g. https://en.wikipedia.org/wiki/Semantic_Web

[Edit] Here's a wider overview: https://en.wikipedia.org/wiki/Knowledge_representation_and_r...


Thanks for the link. It seems to talk about knowledge-graph-type links between entities. It is, however, in a human language (here, English). I am interested in knowing if there's something analogous to "math" to represent knowledge.


The knowledge is represented in the links. Words don't inherently mean anything. The meaning of a word is how it relates to other words. The "math" of knowledge representation is in manipulating and searching the graph. The nodes can be named anything, because the names aren't knowledge, they're just tags on parts of the knowledge.


Most human knowledge can be represented in logic. The Semantic Web is built on description logic, a less powerful logic than first-order logic.



This Wikipedia article is clearly not neutral in tone. It reads like the CEO wrote it.


> require between 1000 and 3000 person-years [to input all the relevant facts in the world]

Which is laughably small in retrospect. I wonder what current estimates are.


Why is there never any fundamental research into whether human intelligence is even computable? All these huge, expensive projects are based on an untested premise.


There has been some philosophical speculating but that's generally not very actionable, with people clinging to either side of the question. On the practical side, it's the sort of thing which you can't just throw money at and make progress. Ok, you have $100mil to research whether human intelligence is computable. What do you do? Hire lots of humans and assign them noncomputable tasks and tap your foot waiting for one of them to turn out to be the next Oracle of Delphi? That's fantastic if one of them does, but if none of them do, then you've made zero progress: there's no way to know whether you failed because human intelligence is computable, or whether you failed because you chose the wrong tasks/humans.


But that's the sort of thing that should be researched: is the question scientifically answerable? The answer is not obviously no. I can think of ways to scientifically test for noncomputability, and if I can, then certainly much smarter and more knowledgeable people can. People just assume, as you do, that it is not, and throw lots of money at that assumption. If the assumption is wrong, not only is AGI a dead end, but "human in the loop" computation should be a huge win.


OK, what experiments would you design to test whether AGI is possible? Given the decades (centuries?) of thought that have gone into the issue, I'm sure a set of experiments would be valuable.


If humans can solve problems that require more computational resources than exist in the universe, then AGI is not possible. I have run one experiment to demonstrate this.


What was the experiment you ran?


Filling in missing assignments for a boolean circuit. In general it is an NP-hard problem, and humans appear to do it pretty well at computationally intractable sizes.
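
For context, the kind of task being described, in toy form (the actual experimental setup presumably differs): given a circuit, a desired output, and some known inputs, fill in the missing ones. Checking a guess is easy; finding one is, in general, brute-force search.

  from itertools import product

  # Tiny circuit: out = (a AND b) XOR (b OR c). Filling in missing inputs to hit a
  # target output is circuit-SAT in general, i.e. NP-hard.
  def circuit(a, b, c):
      return (a and b) ^ (b or c)

  def fill_missing(known, target):
      free = [v for v in "abc" if v not in known]
      for bits in product([0, 1], repeat=len(free)):   # exponential in the unknowns
          assignment = dict(known, **dict(zip(free, bits)))
          if circuit(**assignment) == target:
              return assignment
      return None

  print(fill_missing({"a": 1}, target=1))   # {'a': 1, 'b': 0, 'c': 1}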


Did you publish a paper on these experiments?

I'm not familiar with the boolean circuit problem, but I wonder if it's an instance where the NP hardness comes from specific edge cases, and whether your experiment tested said edge cases. Compare with the fact that the C++ compiler is Turing complete: its Turing completeness arises from compiling extremely contrived bizarro code that would never come up in practice. So for everyday code, humans can answer the question, "Will the C++ compiler enter an infinite loop when it tries to compile this code?", quite easily, just by answering "No." every time. That doesn't mean humans can solve the halting problem, though.


There may be some way the problem set I used is computationally tractable, but I am not aware of such. I have not published the work yet.

But the bigger point is: why aren't others doing this kind of research? It does not seem out of the realm of conceptual possibility, since someone such as myself came up with a test. And the question is prior to all the big AI projects we currently have going on.


I'm not saying it's not scientifically answerable, just that hiring people specifically to answer it is not practical.

This type of thing usually comes through unplanned breakthroughs. You can't discover that the earth revolves around the sun just by paying tons of money to researchers and asking them to figure out astronomy. All that would get you would be some extremely sophisticated epicycle-based models.

https://www.smbc-comics.com/comic/2012-08-09


Bell Labs made a bunch of breakthroughs that way, e.g. information theory.


There is plenty of fundamental research on it, probably a paper about it is published every week or so. The problem is that there is no general solution to the question, and everybody disagrees about how "human intelligence" should be defined in that context. The answers people give depend too much on untestable "philosophical stances."

Personally, I believe that AI is possible (hard AI thesis) and that computationalism with multiple realizability is right, since none of the philosophical arguments against hard AI and computationalism have convinced me so far. But there are as many opinions on that as there are people working on it.


Why wouldn't it be? It seems to me that at worst we would have to wait for computers to become as powerful and complex as a human brain, and then simulating human intelligence would be a matter of accurately modelling the connections.

Is there doubt as to whether a neuron can be represented computationally?


The mind may be nonphysical.


That's one position, but there are three problems with it:

1. You have to solve the interaction problem (how does the mind interact with the physical world?)

2. You need to explain why the world is not physically closed without blatantly violating physical theory / natural laws.

3. From the fact that the mind is nonphysical, it does not follow that computationalism is false. On the contrary, I'd say that computationalism is still the best explanation of how human thinking works even for a dualist. (All the alternatives are quite mystical, except maybe for hypercomputationalism.)


1. No, I don't. I don't have to explain how gravity works to know that it does and to make scientific claims about its operation. Likewise, I can scientifically demonstrate that the mind is nonphysical and interacts with our physical world without explaining how.

2. If the world is not physically closed then physical theory and natural laws are not violated, since they would not apply to anything beyond the physical world.

3. True, but if the mind can be shown to perform physically uncomputable tasks, then we can infer the mind is not physical. In which case we can also apply Occam's razor and infer the mind is doing something uncomputable as opposed to having access to vast immaterial computational resources.

Finally, calling a position names, such as 'mystical', does nothing to determine the veracity of the position. At best it is counterproductive by distracting from the logic of the argument.


I wasn't trying to argue with you, I merely laid out what is commonly thought about the subject matter. Sorry if that sounds patronizing (it's really not meant to). Anyway, if you want to publish a paper defending a dualist position nowadays in any reputable journal, you'll have to address points 1&2 in one way or another, whether you believe you have to or not. It's not as if that problem hadn't been discussed during the past 60 years or so. There are whole journals dedicated to it.

> if the mind can be shown to perform physically uncomputable tasks

That's true. Many people have tried that and many people believe they can show it. Roger Penrose, for example. These arguments are usually based on complexity theory or the Halting Problem and involve certain views about what mathematicians can and cannot do. As I've said, I've personally not been convinced by any of those arguments.

Your mileage may differ. Fair enough. Just make sure that you do not "know the answer" already when starting to think about the problem, because that's what many people seem to do when they think about these kinds of problems, and it's a pity.

> calling a position names, such as 'mystical', does nothing to determine the veracity of the position. At best it is counter productive by distracting from the logic of the argument.

That wasn't my intention, I use "mystical" in this context in the sense of "does not provide any better understanding or scientifically acceptable explanation." Many of the (modern) arguments in this area are inferences to the best explanation.

By the way, correctly formulated computationalism does not presume physicalism. It is fully compatible with dualism.


Yes, I understand computationalism does not imply physicalism, but physicalism does imply computationalism. Thus, if computationalism is empirically refuted, then physicalism is false.

I know the Lucas-style Gödel incompleteness theorem arguments. Whether or not they are successful, the counterarguments are certainly fallacious. E.g. just because I can form a halting problem for myself does not mean I am not a halting oracle for uncomputable problems.

But I have developed a more empirical approach, something that can be solved by the average person, not dealing with whether they can find the Gödel sentence for a logic system.

Also, there is a lot of interesting research showing that humans are very effective at approximating solutions to NP complete problems, apparently better than the best known algorithms. While not conclusive proof in itself, such examples are very surprising if there is nothing super computational about the human mind, and less so if there is.

At any rate, there are a number of lines of evidence I'm aware of that make the uncomputable mind a much more plausible explanation for what we see humans do, ignoring the whole problem of consciousness. I'm just concerned with empirical results, not philosophy or math. As such, I don't really care what some journal's idea of the burden of proof is. I care about making discoveries and moving our scientific knowledge and technology forward.

Additionally, this is not some academic speculation. If the uncomputable mind thesis is true, then there are technological gains to be made, such as through human in the loop approaches to computation. Arguably, that is where all the successful AI and ML is going these days, so that serves as yet one more line of evidence for the uncomputable mind thesis.


> physicalism does imply computationalism

That's not true either.

There are plenty of materialists who think the universe is not computable, thus it's totally possible to believe that the mind is not computable despite being entirely physical.


It's possible, so I should qualify it as our current understanding of physics implies computationalism.

So, if a macro-level phenomenon, i.e. the human mind, is uncomputable, then it is not emergent from a computable low-level physical substrate.


If the mind were found to be uncomputable, I think you'd find vastly more physicists would take that as evidence the universe is uncomputable than that the mind is nonphysical.


So they may, but that would not follow logically. If the lowest level of physics is all computable then the higher physical levels must also be computable. Thus, if a higher level is not computable, it is not physical. We have never found anything at the lowest level that is not computable. None of it is even at the level of a Turing machine, unlike human produced computers.


Any chaotic system (highly sensitive to initial conditions) is practically uncomputable for us, because we have neither the computational power nor the ability to measure the initial conditions sufficiently accurately. Whether there is some lowest level at which everything is quantized, or it's real numbers all the way down, is an open question.
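
For a concrete toy example of that sensitivity (the standard logistic map, nothing to do with minds specifically):

    # Logistic map x -> r*x*(1-x) at r=4 (chaotic regime): two starting points
    # differing by 1e-10 have completely diverged after ~50 iterations, so
    # long-range prediction needs ever more digits of the initial condition,
    # even though each individual step is trivial to compute.

    def iterate(x, r=4.0, steps=50):
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    print(iterate(0.3), iterate(0.3 + 1e-10))   # wildly different values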

I don't think your argument will seem compelling to anyone who doesn't already have a strong prior belief that the mind is non-physical.


I would argue it is the other way around. If people were truly unbiased about whether we are computable or not, they would give my argument consideration. It is those with an a priori computational bias who will not be fazed by what I say.


You're right, but people tend to have strong priors one way or the other, often unconsciously. This is one of those classic cases where people with strong, divergent priors will disagree more strongly after seeing the same evidence. So if you want to convince people you'll have to try harder than most to find common ground.


And that's why I'm not concerned with convincing anyone. The proof is in the pudding. If I'm right, I should be able to get results. If not, then my argument doesn't matter.


Explaining how gravity works doesn't tell you whether gravity itself is a real thing, whether it is metaphysical, or whether it's an epiphenomenon of something else. People talk about it being curvature in spacetime vs. a force, but we're just reifying the math, right?

And I don't think we have a completely firm grasp on what is possible computationally with a given amount of physical resources, given the development of quantum computing.


The metaphysics are unimportant. The important question as far as AGI is concerned is whether human intelligence is physically computable. And quantum computation can be simulated by an ordinary Turing machine (however inefficiently), so it doesn't enlarge the class of computable functions; we can still bound it with Turing computation.


AGI doesn't mean human-like. The idea is not to clone the "nonphysicality" of the human brain but vastly surpass it in raw intelligence.


Yes, there is doubt. Can you say for sure that we have a complete model of all physics, and that all physics can be represented computationally? We're still discovering new features of neurons at the quantum level. Who knows how far down it goes. There may be some unknown physics at play inside neurons that cannot be computed by a Turing machine. https://www.elsevier.com/about/press-releases/research-and-j...


There are aspects of quantum mechanics that we don't understand, but we have no reason to believe intelligence relies on them, any more than bridges do.


We actually do have reason to believe that, since our current understanding of consciousness is very incomplete. Human consciousness extends far beyond our current understanding. I am referring to the full extent of the capabilities of the human mind, not some isolated aspects of it.

The physics of bridges is well known. That is basically a solved problem. Human consciousness/intelligence is an open problem, and may never be solved.


> We actually do have reason to believe that [intelligence relies on quantum properties]

Are you leaving the reason unsaid, or am I in fact reading your argument correctly: "We don't understand consciousness, and we don't understand quantum, therefore it is likely consciousness relies on quantum." There's already plenty of mystery in an ordinary deterministic computation-driven approach to intelligence.


No I'm saying: "We don't have a perfectly accurate physical model of consciousness, we know that physics is incomplete, and our current model of neurons extends to the lowest levels of known physics, therefore there may be unknown physics involved in consciousness, and those unknown physics may not be computable."


In response to

> > we have no reason to believe intelligence relies on [as-yet mysterious aspects of quantum physics]

you wrote

> We actually do have reason to believe that ...

and later clarified

> [some true premises], therefore there may be unknown physics involved in consciousness, and those unknown physics may not be computable.

Saying something could be is different from saying we have reason to believe it. There may be a soul. Absent convincing evidence of the soul, though, we shouldn't predicate other research on the idea that it exists.


I clarified it in my latest reply above. The original comment asked if there is any doubt as to whether a neuron can be represented computationally. We don't know exactly what a neuron is, and are still discovering new subtle mechanisms in their functioning, and they are part of the most complex structure in the known universe, therefore of course there is doubt.


I think it’s pretty certain that we can improve a lot. Whether that leads to human intelligence or something else, we don’t know. But it’s worth working on improving things and trying different approaches even if the final result isn’t known.


But there might be even better approaches if human intelligence is not computable. E.g. if the mind is a halting oracle, that could get us all kinds of cool things.


If the mind were a halting oracle, I don't think most of our open problems in mathematics would still be open.


It's possible for the mind to solve more halting problems than any finite computer, yet still not be as powerful as a complete halting oracle. Thus, the fact we haven't solved every problem does not count as evidence against the mind being a halting oracle.


Actually it does. While it's logically possible, evidence for a hypothesis A is still provided by any data that is more likely under hypothesis A than under hypothesis B.

The hypothesis that the mind is computable but is using heuristics, of various levels of sophistication, explains the data better and is more parsimonious than your hypothesis, because we already have reason to believe that the mind uses heuristics extensively.

Where you see uncomputable oracular insights, others see computable combinations of heuristics. If you introspect deeply enough while problem-solving, you may be able to sense the heuristics working prior to the flash of intuition.


In that setup the evidence makes the uncomputable partial Oracle the most likely hypothesis, since the space of uncomputable partial oracles is much much larger (infinitely so) than either computable minds or perfect halting oracles.


Well, no. That is the same kind of error as Zeno's paradox.

One assigns a prior to a class of hypotheses, and the cardinality of that set does not change the total probability you assign to the entire hypothesis class.

If one instead assigns a constant non-zero prior to each individual hypothesis of an infinite class, a grievous error has been committed (the priors can no longer sum to 1), and inconsistent and paradoxical beliefs are the only possible result.
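
A toy illustration of the bookkeeping (the 1/3 share below is a made-up number, just to show the idea): you can't give infinitely many hypotheses the same non-zero prior, but you can give the whole class a fixed share of probability and spread it inside the class with decaying weights, so everything still sums to 1.

    def prior(class_mass, k, ratio=0.5):
        """Prior for the k-th member (k = 0, 1, 2, ...) of an infinite class,
        spread geometrically so the class total equals class_mass."""
        return class_mass * (1 - ratio) * ratio ** k

    # give the "partial oracle" class a third of the total mass:
    print(sum(prior(1/3, k) for k in range(200)))   # ~0.333, however many members it has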


It sounds, then, like you can just arbitrarily divide up your classes to benefit whatever hypothesis you want, which leads to special pleading. I think that to remain objective one has to integrate over the entire space of hypothesis instances, using an infinitesimal weighting in the case of infinite spaces.


> integrate over the entire space of hypothesis instances, using an infinitesimal weighting in the case of infinite spaces.

Agreed.

However, when you write:

> the evidence makes the uncomputable partial Oracle the most likely hypothesis, since the space of uncomputable partial oracles is much much larger

you seem to argue that a hypothesis is more likely because it represents a larger (indeed infinite) space of sub-hypotheses. Reasoning from the cardinality of a set of hypotheses to a degree of belief in the set would in general seem to be unsound.


But there is. We have fundamental research into whether physics is computable. We also have fundamental research on the physical structure of human consciousness/intelligence. So first we need to discover the physical model of human intelligence, and then we can determine its computability.


Intelligence is a human property, yes, but also a Platonic one. We didn't need to understand how humans process math in order to get computers to do it.


As stated in my other reply: "Human consciousness extends far beyond our current understanding. I am referring to the full extent of the capabilities of the human mind, not some isolated aspects of it."

Computers have not superseded humans in mathematical research. That is way beyond anything that we can program into a computer. Computers are better at computation, which is not the same thing.


By "math" I mean proving theorems, not doing arithmetic. Yes, we're better at finding useful theorems, but computers can do it.

More generally, the fact that currently humans are the only entity observed doing X does not mean you need to understand humans to understand X.


I wrote "computation", not "arithmetic". Human intelligence goes beyond computation / mathematical logic, and you seem to ignore all of that. We haven't got a clue how consciousness works. It's a total mystery.


The aim of AI is intelligence, not human intelligence. It's not to emulate a process; it's to solve problems.

If we do build AI, maybe we'll never know if it's conscious. You can't know whether any other human is conscious, either. But you can know whether they make you laugh, or cry, or learn, or love. The knowable things are good enough.


What if the secret sauce that makes intelligence, the kind that invents AI, is consciousness? I, at least, certainly do a lot of conscious thinking when I solve problems, as opposed to unconscious thinking :)


"I, at least, certainly do a lot of conscious thinking when I solve problems"

That jumps out at me, because I do a lot of "unconscious thinking" to solve problems and I feel like I've read where other people describe similar experiences.

Besides the cliche of solving problems in your sleep, I sometimes have an experience where consciously focusing on solving a problem leads to a blind alley, and distracting my conscious mind with something else somehow lets a background task run to "defrag" or something. But on the other hand there is "bad" distraction too - I'm not sure offhand what the difference is.

It's possible that I'm far from typical, but I also suspect people of different types and intellects might process things in very different ways too.

But to me, I definitely have a strong sense much of the time that my conscious mind engages in the receipt of information about something complex and then the actual analysis is happening somewhere invisible to me in my brain. I'm frequently conscious that I'm figuring something out and yet unaware of the process.

It particularly seems weird to me that other people often seem to be convinced they are conscious of their thought processes, because surely people who are not knowledge workers aren't? I'm not sure if my way of thinking is the "smart way", the "dumb way", or just weird, but I'm sure that there is significant diversity among people in general.

Sometimes I wonder if the model of AI is the typical mind of a very small subset of humanity that's unlike the rest, kind of like the way psychological experiments have been biased towards college students since that's who they could easily get.


I've never solved a problem while completely unconscious. I've occasionally had insights while dreaming, and there is some intuitive aspect to thought that is difficult or impossible to explicitly articulate. But, every instance of problem solving I engage in is connected with conscious thought.


I don't know what "completely unconscious" means, but it doesn't sound like what I was describing.

I think I agree that my problem solving is connected with conscious thought, but the heavy lifting is mostly (or at least frequently) done by something that "I" am not aware of in detail.

When someone is explaining something complicated, pretty often, maybe not always, my (conscious) mind is pretty blank. I can say "yeah, I'm following you", but I feel like I'm not. Then when I start working on it, I feel like I am fumbling around for the keys to unlock some background processing that was happening in the meantime.

Also, when I am in a state where I am consciously writing something elaborate, and I feel connected to the complex concepts behind it, sometimes I get stuck in a blind alley. My context seems too narrow, and often I can get unstuck by just doing something unrelated to distract my conscious mind, like browsing news on my phone and then it's like a stuck process was terminated and I realize what I need to change on a higher level of abstraction.

It's possible I have some sort of inherent disability that I am compensating for by using a different part of my brain than normal, I suppose.


Every instance of problem solving I encounter involves conscious intentionality. As an analogy, when I get a drink from the fridge, there is a lot going on in my body to make that happen that I do not consciously control. But, overall it is taking place due to my conscious intentional control. I argue the same is going on in the mind, a lot of subconscious things going on that I do not directly control, but the overall effect is directed by my conscious control.


That doesn't seem like a good analogy to me, because problem solving is intrinsically about something you don't understand in the first place, whereas when reaching for something you already understand what you are doing.

If I use a mechanical grabber aid to reach something, then it isn't figuring out how to do anything. But if I ask Wolfram Alpha the answer to a math problem, it isn't me doing it.


Sure, it depends on what level your intentionality is involved at. But my experience is that my intentionality is quite intimately involved with my intellectual processes. I cannot just will 'answer my math problem' and have my mind pop out the answer. There are a lot of intentional mental actions that take place to arrive at an answer.


But I wasn't talking about Artificial Intelligence or problem solving. I am talking about Actual Intelligence, specifically human level intelligence.

If we build AI, we could only know whether it's conscious if we know what consciousness is, and that is something we do not know, and perhaps never will. It could be fundamentally beyond our comprehension.


I can't believe this is still a thing.

Then again, I felt the same way when I studied it at university almost 20 years ago. It was pretty obviously a pipe dream then, too.


I thought so on first glance too, but from what I've heard from someone who worked there, it's working software and it makes a lot of money. That's the opposite of a pipe dream, even if far short of AGI.


Is it making real money, or speculative money on the moon shot premise that AGI will rule it all if successful?

I worked at an AI company before, and it was the latter.


My understanding is that it's mostly the latter, but definitely also some of the former; that's based on "I worked there, but our customer list isn't public so I can't tell you who" type statements like you'll see elsewhere.

If I had to guess what it's actually been used for, I'd wager it's money-laundering or counter-terrorism type stuff; it's fairly well suited to finding connections between people and entities given a large data set, and unlike many ML models, it can tell you why it thinks someone is suspicious, which might be needed for justifying further investigation. This is a completely wild-ass guess though, so take it with a giant grain of salt.
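
To illustrate the kind of link analysis I mean (entities and relations entirely made up, and nothing here is based on Cyc's actual system): a search over an entity graph can return the chain of relations as its own justification.

    from collections import deque

    edges = {
        ("alice", "wired_money_to", "shellco"),
        ("shellco", "shares_director_with", "holdingco"),
        ("holdingco", "owns_account_at", "offshorebank"),
        ("bob", "owns_account_at", "offshorebank"),
    }

    def explain_connection(start, goal):
        """BFS over the entity graph; returns the relation path (the 'why')."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            node, path = frontier.popleft()
            if node == goal:
                return path
            for (s, rel, o) in edges:
                for nxt in ((o,) if s == node else (s,) if o == node else ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, path + [(node, rel, nxt)]))
        return None

    print(explain_connection("alice", "bob"))
    # a chain from 'alice' through 'shellco', 'holdingco', 'offshorebank' to 'bob'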


Yes, that was the sort of use case at the AI company I worked at: similar data mining, competing with Palantir.


I think that this particular topic is evergreen because people are perennially surprised that this technology, which seems so reasonable and advanced at first blush, has failed to be useful in practice.


I spent a year doing an ontology postdoc. I can't speak for Cyc, but from what I saw of the ontology world, there are a lot of charlatans and people who are using it as a buzzword to make grant proposals sexier. Whether or not there's real potential in the technology, that kind of environment surely isn't conducive to achieving said potential. An outsider trying to peek into the field is immediately swamped by vast oceans of garbage, and everyone in the field is an expert at marketing themselves so if there are legitimate researcher gems in the field, I don't know how you'd actually find them from amidst all the noise.


This supply-side opacity is definitely a problem. There seems to be a corresponding demand-side problem, that clients often don't know quite what they want. If there was an easy way of generating hard tests with clear-cut answers, maybe there would be an easy way for a winner to distinguish themselves.


There's still no complete explanation of why it is a failure. What are the technical difficulties they can't overcome?


Maybe because the world and human knowledge are incredibly complex and difficult things to put into logical relations well enough to achieve more success?

Consider that humans learn through having bodies to explore the world with, while forming a variety of social relations to learn the culture. Which is very different from encoding a bunch of rules to make up an intelligence.


Creating software that learns about the real world indeed seems like a really hard problem without a body.

I was referring to this: why is software that parses the semantics of Wikipedia articles and makes them queryable through natural-language questions something that humanity isn't able to build?


That might depend on how much semantics is related to having a body. We do utilize quite a lot of metaphors that are based on the kinds of bodies and senses we have. The question here is how much embodiment is necessary for understanding semantics. Maybe it's possible to brute force around that with ML, or stack enough human hours into building the right ontologies in symbolic AI. But maybe not.

I think people like Rodney Brooks are of the belief you need to start with robots that learn their environment and build up from there.



