
If you're looking for a solution that takes data privacy and security seriously, you should look into Sprig (sprig.com). We work with many customers in the FinTech space, such as Square, who have extremely high data privacy standards.

Disclaimer: I'm the Founder/CEO. Just send me a note (see my profile) and I'm happy to get you set up, or you can create a free account on our website.


You already posted a marketing blurb for your competing startup in this thread: https://news.ycombinator.com/item?id=30377349. Continuing to do that is excessive and distasteful, so please don't.

Because this is a YC startup's launch thread, I would normally hesitate to post like this (we moderate less when a YC startup is involved: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...), but I would say (and have said) the same thing in non-YC launch threads, and something about this case feels worse than usual.

Launch threads are a bit different from regular threads in this respect. It's of course fine for users to sincerely ask how the launching thing is different from existing things; it's borderline for a competitor to post a link to their thing, depending on how they do it; but to try to divert discussion to one's own thing is just bad manners.

I've detached this subthread from https://news.ycombinator.com/item?id=30375868 and marked it off topic.


Disclaimer: I’m the Founder/CEO of Sprig (sprig.com) - the industry leader for in-the-moment research.

Congrats on the launch! It’s great to see other companies emerge in this space. Product managers, designers, and researchers too often rely on panels of people placed in hypothetical situations. Research is most valuable when conducted with actual customers as they are experiencing your product.

We haven’t seen the problem you’re describing among smaller startups. It’s actually better to speak with customers directly, 1:1, until your product starts to achieve scale.

One suggestion for you is to add video questions. This is a great way for early-stage startups to see and hear from users directly. It’s been so well received by startup founders that video questions are included in Sprig’s Free plan.

Also, the 50-70% response rate is suspect. Sprig has surpassed 2 billion unique users tracked and millions of survey responses for customers including Dropbox, Loom, and Square. We’ve seen response rates as high as 90%, but on average we see a 30% response rate. 1Flow’s survey design is an exact clone of Sprig’s (see comparison: https://www.loom.com/i/356c650a70b94fffa9a85da83b546595), so differences in design won’t be a factor. Even a 30% response rate, though, is significantly higher than an email survey, which is typically around 2-5%.


Hi Ryan, nice to meet you here. I have lots of respect for what your team has been able to accomplish in terms of fundraising at crazy speed and valuation, and building a user research product that a few large companies have been willing to try.

I've actually talked to a lot of startup founders, product leaders, and even current customers of Sprig, and learned that most didn't want to put video chats inside their app because of how disruptive it is to the user experience. Zoom, UserTesting.com, etc. have much better ways of doing this, and they've been doing it successfully for years. We think you're serving big brands' user research teams well because they really need video customer chats; because of our different approach to who we serve and our design philosophy, we don't yet see adding video as a priority.

We did months of customer research before building 1Flow - if your users were truly happy, we wouldn't exist.

With regard to the UI "clone" issue, I can't agree. There are already many tools such as Pendo, Appcues, Survicate, etc. that use this approach, but as I explained in my post, it is really about providing an experience both software makers and their users will love - at least that is the goal of 1Flow. Thank you for bringing this to our attention; with regard to the UI, I think we can definitely do a better job! :)

Our response rates are based on real data that we see. We are a smaller startup trying to serve the other startups of the world, and we are not serving enterprise customers at our stage. So I can't join you in turning this into a numbers competition, and I'm not interested in doing so. All I can say is that your 90% seems like a one-off, but I understand how things work and wouldn't want to take you up on this.

Finally I want to say that we are both trying to innovate in a space traditionally dominated by players like Qualtrics, SurveyMonkey, Medallia, InMoment, and 999 other survey tools. So I'd LOVE to stay connected with you and support each other however we can.

Kai

P.S. As founders we are all a bit scared of competition, I understand. At 1Flow, we've tried our best to focus on actually delivering value to our users.

- AirBnB wasn't the first home sharing site

- Stripe wasn't the first payment platform

- Facebook wasn't the first social network

What really matters at the end of the day is finding product-market fit and executing well. This is just my 2 cents.


Great response. Looks like you guys are targeting two totally different groups. Sprig only has linked responses, which are the same as Google Forms, unless I "Contact Sales", which I am not going to do as a small startup.


UserLeap | Front-end Engineer | San Francisco, CA | Full-time | Onsite

UserLeap is modernizing customer surveys with artificial intelligence. Leveraging years of industry experience, UserLeap helps its customers uncover the most critical issues across their user base, helping to improve conversion rates and increase retention. No longer will companies need to rely on teams of people calling and surveying their customers.

This is your chance to join a startup in one of the most exciting phases, where you can become an early member of the team and play a vital part in our growth. We’re quickly signing larger and larger enterprises and looking for an experienced Senior Frontend Engineer to own and develop new features for our customer dashboard.

UserLeap is based in San Francisco, CA. The company raised venture financing led by Hack VC. The CEO has been an early team member at 5 successfully acquired startups, including Weebly (acquired by Square), Vurb (acquired by Snap Inc), and Extrabux (acquired by eBates).

Interested? Shoot me a note and let's chat: ryan@userleap.com, or apply at https://jobs.lever.co/userleap


UserLeap | Front-end Engineer | San Francisco, CA | Full-time | Onsite

UserLeap is building the next generation of automated customer survey and analysis tooling for the enterprise. Leveraging years of industry experience, UserLeap helps its customers uncover the most critical issues across their user base, helping to improve conversion rates and increase retention. No longer will enterprises need to rely on teams of people calling and surveying their customers. UserLeap replaces the time-intensive and costly process that companies use today with an automated and dynamic solution.

This is your chance to join a startup in one of the most exciting phases, where you can become an original, founding member of the team and play a vital part in our growth. We’re quickly signing larger and larger enterprises and looking for an experienced Senior Frontend Engineer to own and develop new features for our customer dashboard.

UserLeap is based in San Francisco, CA. The company raised a Seed round led by Hack VC. The CEO has been an early team member at 5 successfully acquired startups, including Weebly (acquired by Square), Vurb (acquired by Snap Inc), and Extrabux (acquired by eBates).

Interested? Shoot me a note and let's chat: ryan@userleap.com, or apply at https://jobs.lever.co/userleap


UserLeap | Full-Stack Engineer | San Francisco, CA | Full-time | Onsite

UserLeap is the first AI-powered user researcher that automates customer survey and analysis for large software companies. These companies often have teams of people calling and surveying their customers and UserLeap replaces this process. This is your chance to join a VC-backed startup in one of the most exciting phases, where you can become an original, founding member of the team and play a vital part in our growth.

We’re quickly signing larger and larger enterprises and looking for an experienced full-stack engineer to develop new features for our customer dashboard. You'll be working closely with our highly experienced engineering team and have exposure to the development of our ML and NLP models.

Ideally, you have experience with some of the technologies we use: UserLeap is built with AWS, React, Node.js, and Postgres.

Interested? Shoot me a note and let's chat: ryan@userleap.com, or apply at https://jobs.lever.co/userleap


Thanks for the positive feedback! For those who don't have an iPad here's a video that previews how the app works: https://www.youtube.com/watch?v=cAuKRyuHQcc


When creating a new account on an iPad, it took me three tries to pick a password that wasn't "too long". It would be helpful if the error message indicated the maximum allowable password length.

I enjoyed using the app. Easy to understand and super easy to get something published. I sent it to my mom, who was just this weekend asking how hard it would be to set up a website for her friend's small business.


Interesting read, but I would have to disagree. It's not difficult to reach 90% confidence with a very small sample size:

  - Variation A and B each receive 20 visits
  - Variation A receives 10 clicks while variation B receives 5 clicks
  - The confidence level for Variation A is 90%
  (Source: https://mixpanel.com/labs/split-test-calculator)
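
As a rough sanity check of those numbers (assuming the calculator is doing something like a one-sided two-proportion test, which is a guess on my part rather than anything documented), the result can be reproduced in R:

    # 20 visits per variation; A gets 10 clicks, B gets 5.
    # One-sided two-proportion test, no continuity correction.
    prop.test(x = c(10, 5), n = c(20, 20), alternative = "greater", correct = FALSE)
    # The one-sided p-value comes out around 0.05, i.e. roughly 95% "confidence",
    # so a 90% figure is plausible for counts this lopsided.
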
Also, I wrote an article titled "Creating Successful Product Flows" that is very relevant to this post: https://medium.com/design-startups/c41ffbce49a1


Of course, if you are A/B testing something which doubles conversions from 25% to 50% (a 100% improvement), you'll know quickly. However, if you're looking at something more realistic, like taking conversions from 5% to 5.5%, you're looking at around 10,000 visits each for 90% confidence.
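
If you plug that scenario into R's power.prop.test, the numbers come out in the same ballpark (the 50% power figure below is my assumption - with a more conventional 80% power you'd need roughly 25,000 visits per variation):

    # Detecting a lift from 5% to 5.5% at 90% confidence (two-sided),
    # with only a 50/50 chance of catching the effect when it is real:
    power.prop.test(p1 = 0.05, p2 = 0.055, sig.level = 0.10, power = 0.5)
    # n comes out around 10,000-11,000 per variation.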


A startup isn't looking to make tiny 0.5% incremental improvements, so I don't see how this is relevant. Companies looking to grow a small user base are making significant changes, seeking significant improvements.


Your average well-crafted sales page on the internet has a conversion rate of 2.5%. A 0.5% increment is a HUGE difference. You're lucky if you get a 0.2% increment after an extensive A/B test.


Two things here. First, 90% confidence isn't great; I look for 99% confidence in running tests. Second, this assumes there is a lot of stuff you can test that produces 2x gains, when in reality the number of things that do that is very small.

It's fair to A/B test things you expect to produce high-leverage changes. That was actually part of the point of the article: no small tests. Focus here first; consumer psych helps you figure out where these opportunities are.

Once you get through these big opportunities, though, even respectable gains (e.g. 10%) take a lot of traffic to measure. For example, seeing a 10% gain in a 50% conversion rate takes around 2500-3000 visits to A/B test at 99% confidence. Seeing a 10% gain in a 10% conversion rate at 99% confidence takes 10 times more traffic than that.
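
For reference, R's power.prop.test gives figures in the same ballpark (the 99% confidence level matches the numbers above; the 80% power is my assumption):

    # A 10% relative gain on a 50% conversion rate, 99% confidence, 80% power:
    power.prop.test(p1 = 0.50, p2 = 0.55, sig.level = 0.01, power = 0.8)
    # n is roughly 2,300-2,400 per variation.

    # The same relative gain on a 10% conversion rate:
    power.prop.test(p1 = 0.10, p2 = 0.11, sig.level = 0.01, power = 0.8)
    # n is roughly 22,000 per variation - close to 10x the traffic.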


> Two things here. First, 90% confidence isn't great, I look for 99% confidence in running tests.

Why? Why are you so worried about controlling false positives that you're willing to eat a whole bunch of false negatives?*

You're not administering expensive drugs to cancer patients, you're designing a website! If you mistakenly think that green buttons perform better than blue buttons when the actual truth is the null hypothesis that they perform the same, that's not the end of the world.

* and I do mean a whole bunch; in that scenario, moving from alpha=10% to alpha=1% cuts your power to detect the real difference by something like 3x, i.e. many more false negatives. The power calculations:

    R> power.prop.test(n=20, p1=0.5, p2=0.25, sig.level=0.10)
    ...
              power = 0.4951
    ...
    R>
    R> power.prop.test(n=20, p1=0.5, p2=0.25, sig.level=0.01)
    ...
              power = 0.1646
    ...
    R>
    R> 0.4951/0.1646
    [1] 3.008


There will be times when you make a change to a page and the difference in reception between the two pages is as stark as the situation you described above, where out of 20 clicks one page does twice as well. But most often there is a very minor difference between the click rates of the two pages, like less than 1%. In that case, you need a much larger sample size.

And even if you do get lucky and get a test like the one you described above, chances are you'll want to continue to revise the page and make more subtle changes, which will mean you need a much larger sample size even to reach the low bar of 90% confidence.


Can someone with expertise comment on this? I once worked in a company where the founders thought that the small samples were adequate. I thought that the calculators were misleading with such small sample sizes, even though they gave "high confidence".

But that was only based on my intuition, not math, and I've never seen anyone give a good discussion of whether "90% confidence" is as definitive as it sounds in the context of a very small sample.


It's a bit awkward to give a full answer to this, but this is to the best of my understanding and explained as simply as is reasonable:

A small sample has less statistical 'power' to identify significant differences where they exist. Put another way, a large sample is more likely to give a true significant result than a small sample.

But, if you do see 10% significance(/90% confidence) in a small sample, this is just as good as 10% significance in a large sample. Although the cutoff point will be more rough in a smaller sample, it's a good standard practice to round conservatively to account for this.

10% is unlikely to be considered a good result for statistics in either case - you can engineer a result by doing 10 tests on nothing and there's a danger you would have unknowingly or unconsciously done this, maybe (for example) by not deciding the sample size in advance. However, there's also presumably strong enough evidence against a harmful difference that you aren't likely to lose anything by following these results.

It can be a good idea to do numerous small investigative tests as justification for bigger tests - relying on lots of small tests alone requires accounting for multiple testing (e.g. a Bonferroni correction).
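
To make that last point concrete (the p-values below are invented, purely to show the mechanics), a Bonferroni correction is a one-liner in R:

    # Five p-values from five separate small tests (made-up numbers):
    p <- c(0.03, 0.04, 0.20, 0.01, 0.55)
    p.adjust(p, method = "bonferroni")
    # Each p-value gets multiplied by the number of tests (capped at 1):
    # 0.15 0.20 1.00 0.05 1.00 - most of what looked "significant" no longer is.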


"But, if you do see 10% significance(/90% confidence) in a small sample, this is just as good as 10% significance in a large sample". That is not true, strictly speaking. You are assuming that small sample describes the underlying distribution well. But this may not be the case due to non-normality of the distribution itself or potential biases


Cool point and I agree.

The sample has to represent the population; that's fundamental. If the sample is so small that it can't characterise the population distribution, then you have a problem anyway. If you're measuring events that happen 1% of the time (or 99% of the time), a sample of 100 is not nearly enough.

If you chose an appropriate non-parametric test to cover an unknown distribution with a small sample, it might well have zero power (i.e. it would be impossible for it to give a significant result).


There's no such thing as a "small" or "large" sample size, per se. If you're doing it rigorously, you need to fix both your confidence level (e.g., 95%) and the effect size you expect to see (e.g., a 50% lift in metric X relative to your control). You can then do some simple math (a power calculation) which will tell you what sample size you need to reliably detect a 50% lift in metric X at that confidence level. Finally, you run the test until you've sampled that many users and stop the test. If there's a winning variant and it's statistically significant, congrats! If not, go back to square one.

The larger the effect size, the smaller your sample size can be before you reach that conclusion.
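
To make that trade-off concrete (the 95% confidence and 80% power used here are assumptions, not figures from the paragraph above):

    # A 50% lift on a 10% baseline is detectable with a fairly modest sample:
    power.prop.test(p1 = 0.10, p2 = 0.15, sig.level = 0.05, power = 0.8)
    # n is roughly 680-690 per variant.

    # A 10% lift on the same baseline needs over 20x more traffic:
    power.prop.test(p1 = 0.10, p2 = 0.11, sig.level = 0.05, power = 0.8)
    # n is roughly 14,000-15,000 per variant.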

Most folks don't fix the desired effect size and instead just create a bunch of variants, start the A/B test, wait for the A/B testing framework to shout "statistically significant!", and then declare a winning variant. If the sample size seems "too small" they might not feel comfortable declaring a winner, so they perfunctorily "get a few more samples." Neither of these are rigorous, so it's a bit pointless to debate about which one is "better."


Small sample sizes are misleading. You probably need at least 100 data points for reasonable significance, but if your data is skewed or has fat tails then you most likely need much more than that.


> It's not difficult to reach 90% confidence with very a small sample size:

I think the difficulty in reaching 90% confidence is in designing a challenger that is THAT much better than the original (i.e. 10 clicks vs 5). Most split tests are shots in the dark. You'll basically need a design or copy that is doing pretty badly and a challenger that is a lot better (but not so obviously better that you would have used it in the first place).


This is what happens when you use free stock photos. The photo in question is taken from Unsplash (http://unsplash.com) and can be found if you scroll to the very bottom.


> This is what happens when you use free stock photos.

It happens often enough with not-free stock photos, as well. You're not paying for exclusivity in a lot of cases. I worked for a company that rolled out a front page with some office-themed photos which matched a nationwide office supply retailer's page.


This is akin to a new hire getting excited about a compensation package of salary + 100,000 shares. 100,000 shares, out of how many outstanding? 1 million shares? 10 billion shares? He raised $1.1M, but at what valuation? After reading the article, I was left wondering if the valuation was low. The team has not proven that they have reached product/market fit or found a scalable marketing channel, which makes this a risky bet for investors.


Most likely a convertible note - the investment is taken as debt that converts into equity when the venture raises a priced round (viz. a series A). Neither party has to value the venture at the seed stage, which is usually for the best as it's too early to tell.

Congrats John!


I can't discuss the terms obviously. All I can say is that we felt they were very fair.


I mean, you can. You are CEO, and there is AFAIK no legal restriction on you releasing details of your funding. It's just not in your interest to do so.


I didn't expect you to discuss the terms, but the amount raised doesn't really mean anything given there's no context.


It doesn't mean nothing. There's a range of equity that will be given away during any round. I don't know how high the variance is but it's probably less than 30% and more than 10%.


How a company with no revenue or product can have a "low" valuation that results in a $1M+ investment is beyond me. Any value beyond $0 is hyperbolic. No revenue, no product, no proof, no value.

Yes, you can have potential value, which is of course what seed round investors invest in, but I find it astounding that there are seeds for technical products that reach even this size. You can produce a prototype for $100k. When it's time to market it, Series A.

I'd love to be happy for you, but as someone who's spent a decade building a profitable business and SaaS that's still apparently worth nothing because we make money, I can't accept that a $1M+ investment at a seed round is a "low valuation". It's not. It's a very generous valuation. Until you have paying customers, your value is $0.

Please don't get me wrong - not trying to denigrate what you guys are doing, or to piss on your parade, but I just find the entire idea that people would say this is a low valuation mind-boggling.


iScroll is the industry standard for this. I've seen it used on a lot of web apps and it works pretty well.

http://cubiq.org/iscroll-4

