Hi HN, we’re Matt and Mayank of Roundtable. We’re launching RoundtableJS, an open-source library (https://github.com/roundtableAI/roundtable-js) to program research surveys at scale.
Academic and market researchers want a DIY platform for survey research. Most existing tools are no-code, but these researchers want programmatic software that handles advanced branching logic and customization. We know this because we spent five years using open-source survey software for our academic work.
One of the key insights behind RoundtableJS is to leverage JavaScript’s async/await functionality for managing survey logic. In other libraries, survey logic is defined via callbacks that fire when a page is submitted (e.g., if the user answers “Yes” to Q2, show page 3), which quickly becomes a nightmare to manage and debug. Our library instead lets developers build the timeline in a single async function, so the logic flows intuitively from top to bottom.
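Roughly, a timeline looks like this (a simplified sketch; the class names, method names, and import paths are illustrative placeholders rather than the exact API):

    // Simplified sketch: names and import paths are illustrative placeholders,
    // not necessarily the exact RoundtableJS API.
    import Survey from 'roundtable-js/core/survey.js';                     // hypothetical path
    import MultipleChoice from 'roundtable-js/elements/multipleChoice.js'; // hypothetical path

    async function runSurvey() {
      const survey = new Survey({ participantId: 'p_123' });

      // Each `await` resolves when the respondent submits the page,
      // so branching reads top to bottom like ordinary control flow.
      await survey.showPage({
        id: 'page_2',
        elements: [
          new MultipleChoice({ id: 'q2', text: 'Have you used our product?', options: ['Yes', 'No'] }),
        ],
      });

      // No callback registration: just branch on the stored response.
      if (survey.getResponse('q2') === 'Yes') {
        await survey.showPage({
          id: 'page_3',
          elements: [
            new MultipleChoice({ id: 'q3', text: 'Would you recommend it?', options: ['Yes', 'No'] }),
          ],
        });
      }

      survey.finish();
    }

    runSurvey();

The point is that conditional pages become plain if-statements inside one function, instead of callbacks scattered across page definitions.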
However, this fully programmable logic can be inaccessible to non-developers, so our cloud offering (surveys.roundtable.ai) lets users program surveys in natural language by loading the library into the context of an LLM, which acts as a library-specific copilot (demo: https://www.loom.com/share/7e7f30aa96244542a62d6c6ed125f2d5?...).
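Conceptually, the copilot is just the library’s documentation placed in the model’s context. A minimal sketch of the idea (using the OpenAI Node SDK as a stand-in for whichever model we run, with a hypothetical docs file; not our production code):

    // Sketch only: put the RoundtableJS docs in the system prompt so the model
    // writes code against the library instead of generic JavaScript.
    import OpenAI from 'openai';
    import { readFileSync } from 'node:fs';

    const client = new OpenAI(); // assumes OPENAI_API_KEY is set
    const libraryDocs = readFileSync('roundtable-js-docs.md', 'utf8'); // hypothetical doc bundle

    async function generateSurvey(request) {
      const completion = await client.chat.completions.create({
        model: 'gpt-4o',
        messages: [
          { role: 'system', content: 'You write surveys with RoundtableJS. API reference:\n' + libraryDocs },
          { role: 'user', content: request },
        ],
      });
      return completion.choices[0].message.content; // RoundtableJS code for the user to review and deploy
    }

    generateSurvey('A 3-question NPS survey that asks for an explanation when the score is below 7')
      .then(console.log);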
We think being open-source will give us an edge in integrating LLMs into our platform. We designed RoundtableJS with LLM usage in mind, keeping it fully customizable and general-purpose. The alternative, building a library with a more constrained focus, would be more intuitive for unassisted programmers, but AI assistance makes it easy to customize any part of the survey. We already think this approach is easier and faster than no-code alternatives.
We left academia last year and have been building products in the survey space since. Our first product was an AI-powered survey simulator (https://news.ycombinator.com/item?id=36865625). People loved playing with it, but it was difficult to monetize (as many HN comments predicted). We abandoned it, talked to customers, and found that data quality is a huge problem in market research (e.g. https://www.kantar.com/company-news/kantar-partners-with-rea...). We then released an API that analyzes open-ended responses for low quality and fraudulent typing behavior. It helps firms save time cleaning data, but it serves a limited market.
We plan to monetize through our cloud offering, which currently includes instant link deployment, an interactive code editor, and a fraud-detection suite; we will be adding more features for analytics, design, and productivity.
Surveys are ubiquitous, but they are often bland and generic. By allowing people to fully customize them for today’s distribution channels, we want to make it easy to design better, more engaging surveys.
We just launched. Where can we improve?
Demo login: Username: show-hn@news.ycombinator.com / Password: Hackernews2024!