Asking developers to own QA is broken because developers are naturally biased towards the happy path. If you want to build a bar, you need someone to order -1 beers[1].
Handing off QA to an external team is broken because those people don't have the necessary experience with the product, nor can they quickly and easily engage with development to get to the heart of a problem (and a fix).
Having QA rely exclusively on automation brings quality down as the application's complexity goes up. Writing tests to cover every possible edge case before the product ships isn't feasible.
The best solution I've seen in the decades I've been in software development has been to have QA and Dev as part of a team. Automation covers as much of the product as possible, and continually grows as issues are identified that can be covered by CI from here on out. QA becomes essential at the boundary of new features, and requires a deep understanding of the product in order to know what knobs to turn that might break the application in ways the developer never thought about. Once those sharp edges are also automated, QA pushes the boundary forward.
I've also arrived at this approach and don't think it's that uncommon - IMO the article is presenting a false dichotomy.
There are still gotchas to look out for in the team-embedded QA approach. In typical team sizes, you often end up with only one QA per team - you need to make sure they have cover (everyone needs a break), and they need support in their discipline (do something to share QA knowledge across teams).
Absolutely. Disappointing to see this sort of shallow sales pitch blogspam making it to the frontpage. I'm surprised more HN readers don't see through this.
The entire purpose of this article is to self-servingly attempt to convince the reader that their product is the only solution to QA problems.
It correctly identifies some challenges with QA, but this solution is certainly not the only way to have effective QA. That Rainforest is resorting to such a disingenuous presentation of solutions to QA issues makes me think they probably don't actually solve QA problems very well.
"We have researched what makes QA successful and X, Y, and Z are what we found. Here's how we believe we're solving Z for our customers" would be a much more honest pitch.
I totally see your point here - it's frustrating that how I presented it undermined the impact of the core point, which is that siloed ownership of quality is an anti-pattern, and an anti-pattern which is often a response to the limitations and design principles of quality tooling.
We are, to my knowledge, the only product built for this kind of cross-functional ownership, which is why I don't recommend other products.
While I agree the post is too marketing-y - to be honest I didn't expect it to appeal so much to the HN audience - it was written in good faith. We built the product because after 7 years of building a product for one of those silos, we became convinced that the only long-term durable solution was to empower everyone to own quality. Clearly I need to figure out how to strike the balance that you outline in your last sentence, which was the goal!
Oh c'mon. Of course a blog on the website of a company is ultimately trying to pitch their product, one way or another. That does not invalidate the points they bring up per se.
It obviously needs a bit of critical thinking when reading it and taking everything with a grain of salt, but that's something I really hope we can expect people on HN to be capable of?
QA also works best for features that the user can meaningfully interact with. The more esoteric, interconnected, or complex the failure is, the more it gets muddled with other reports and weird conditions (it breaks at 3pm when the moon is in equinox and I have my mouse pointed north). That's an exaggeration, but the sentiment is accurate. QA will frequently lack a deeper understanding of how the software works and is connected. That ignorance can be valuable (finds blind spots) but has trade-offs (assumptions are made based on imperfect observation of cause and effect, where human biases lead you down dark hallways). Speaking from experience with lots of manual QA, which is actually a rarity these days.
The other thing to consider is, if you’re careful, user feedback can be obtained more quickly. If you can keep things stable, then you won’t upset your users too badly and you’ll avoid the need for QA to act as a stand in.
I agree, this all makes sense. Although I think the team-embedded QA is generally the right thing, I wouldn't use it blindly in all cases. Some teams I manage only produce HTTP APIs; these are ideal candidates for automated testing (incl. end-to-end integrated tests) and the developers are happy to own this without a QA on the team.
Agreed it's presenting a false dichotomy. The article ends with a pitch for Rainforest's "no-code" QA platform so it makes sense they want to present their product as the clear solution. I could take the article more seriously if it at least mentioned a more integrated approach.
Most projects I've worked on have been games where the studio was split into teams of about 6-10 people with one dedicated in-house QA member for some but not all teams, a few in-house QA workers not assigned to a specific team, plus a much larger external QA team (from the parent company/ publisher). That has worked great.
I've also been on projects with only external QA. It works, but the lack of internal QA has been a frequent annoyance.
Yeah - in my experience 6-10 people per team is typical, often with one of them being dedicated QA. Having this, plus a separate central QA team is one way to address the pitfalls of embedded QA I was pointing out - they can cover for time away for team QA, or act as bench capacity.
>Asking developers to own QA is broken because developers are naturally biased towards the happy path.
I've been arguing the same thing for a while now. There's a clear mismatch in incentives. If you hand off the QA to developers you might initially get fewer bug reports, but that doesn't mean your software isn't garbage.
Some of the best QA engineers I've worked with were developers in a previous life but they've since spent years cultivating a very different way of thinking about software. Believing you can get your engineers to adopt that mindset overnight is pure fantasy.
[author here] You are absolutely right: the optimal setup is a cross-functional team of domain experts collaborating on solving the customer's problem. The article is meant to point out that siloing QA to either just dev or just QA is an anti-pattern that is unstable and will not work, long term, without significant pain.
I think the challenge is that, while we agree on the optimal setup, it's almost never done like this. There's lots of reasons why, but the main one (imo) is organizational scaling. The logic of scaling teams tends towards specialization, especially because developer time is extremely expensive.
There are many ways to address the problem of QA. From our perspective, if they don't broaden the ownership of QA from just developers or just QA engineers (which is where all the current products are targeted), they will exacerbate this specialization problem.
I think that saying developers should not be interested in or responsible for quality is most certainly a cop-out; there can and should be a level of pride of workmanship that is focused on the customer.
The main challenge with software is the possibility of unexpected regression in areas that are non-obvious when implementing a given feature. One of the more scalable ways of catching these regressions is via some sort of automated testing. I think that a QA support team can provide the tools and support that allow teams to easily add end-to-end testing, which is vitally important when building large distributed software.
If the QA team is responsible for testing every feature from many teams there is definite risk of bus factor and finger pointing.
In the super ideal world the testing is specified via something like Cucumber, and the original author could be the product owner, with feedback from the engineer as he works to that specification. But I have never seen this, though I have heard it can happen, rarely.
Hi author, I found parts of this downright wrong, especially on the Cypress side, which is where I have the most familiarity:
>Compare this to code-based automation solutions like Selenium or Cypress, where a coder would need to manually update the code across all the relevant tests any time the product changes.
... Well, no. Those tools can have reusable chunks and references that work just like your steps and are organized in the same manner, so that things used multiple places are stored once and reused. That's a basic programming good practice. When a change is needed to, say, a login flow, you don't change 100 UI tests, you change the login function that those tests are all calling.
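To make that concrete, here's a minimal sketch of the pattern (the app, routes, and selectors are all invented for illustration):

```ts
// cypress/support/login.ts - one shared login flow (hypothetical app; selectors are assumptions)
export function login(email: string, password: string) {
  cy.visit('/login');
  cy.get('input[name="email"]').type(email);
  cy.get('input[name="password"]').type(password);
  cy.contains('button', 'Log In').click();
}

// cypress/e2e/billing.cy.ts - every spec reuses the helper instead of re-scripting the flow
import { login } from '../support/login';

it('lets a signed-in user open the billing page', () => {
  login('user@example.com', 'correct horse battery staple');
  cy.contains('a', 'Billing').click();
  cy.contains('h1', 'Billing').should('be.visible');
});
```

When the login flow changes, only the helper changes; the hundred specs that call it don't.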
> Tests created in code-based automation solutions like Selenium and Cypress evaluate what’s going on in the code behind the scenes of your product’s UI. That is, these solutions test what the computer sees, not what your users see.
and
> For example, if the HTML ID attribute of the signup button in the example above changes, a Selenium or Cypress coded test that identifies that button by its ID would likely break.
This is all very disingenuous in my book. In these tools the tests interact with the rendered web page and _can_ be made to use IDs, classes, and other brittle stuff as selectors, but that's a naive approach that a professional wouldn't typically use, because the brittleness surfaces early and there are better patterns that avoid it. One of those is to use selectors based on something that itself should be present - e.g., click a button based on its label being "Log In". If that test fails because the button label was changed or removed, that's a reasonable failure, because labels are important in the UI. In the case where we are selecting/making assertions about DOM elements that don't have meaningful user-facing labels, dedicated test-id attributes solve the brittleness problem. But by and large, if something is interactive to the user, it should have a label, so the fallback to test attrs for selectors is rare, and used for the case where we want to verify that some specific effect took place that is not itself an interactive element.
It's unfair to suggest that these tools inherently create brittle tests.
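A quick sketch of what I mean, with invented selectors:

```ts
// Brittle: breaks on any refactor that regenerates or renames the ID
cy.get('#btn-4f2a').click();

// Robust: select by the user-visible label; this only fails if the label itself changes,
// which is a failure you actually want to hear about
cy.contains('button', 'Log In').click();

// For elements with no meaningful user-facing label, a dedicated test attribute
// survives DOM restructuring without coupling the test to styling or structure
cy.get('[data-testid="upload-progress"]').should('contain.text', '100%');
```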
> Rainforest QA tests don’t break when there are minor, behind-the-scenes code changes that don’t change the UI.
but
> It works by pixel matching and using AI to find and interact with (click, fill, select, etc.) the correct elements.
Look, maybe your tool is GREAT, I haven't tried it. But a functional UI test based on label contents may be more robust than something that breaks when the visual structure of a page changes, which itself happens pretty frequently. Maybe the AI is good enough to know that a bunch of stuff moved around but the actual flow is the same and can find its way through. With tests based on labels and dedicated selectors, devs can make massive structural changes to the DOM and not break any tests if all the workflows are the same.
I would sure love if this post contained more info on how the visual AI system of selecting elements is better than using labels, test attributes etc as is conventional in automated testing done by professionals. Likewise it would be good to compare with an actual competitor like Ghost Inspector, where non-programmers have been able to generate step-based tests like this for years. The main gripe I had with Ghost Inspector was that it creates brittle selectors and so tests need to be changed a lot in response to DOM changes, unless a developer gets in there and manually reviews/picks better selectors.
If what you have is a tool that _makes more robust tests than Ghost Inspector_ but is _as easy for a PM to use_, then that is interesting.
I actually support this underlying idea completely - I love systems where non-developers can create and maintain tests that reflect what they care about in the UI. I even love it when tools like this create low-quality tests that are still expressive enough that a dev can quickly come in and fix up selectors to make them more suitable. Cypress Studio is a currently-experimental feature that looks promising for this too, allowing folks to edit and create tests by clicking in the UI, not editing code files. It's a good direction for automated test frameworks to explore.
I'm just really uncomfortable after reading this post. It strays beyond typical marketing hyperbole to be actually deceptive about the nature of these other tools and the practices of teams using them. Instead of highlighting the actual uniqueness of your product, you exaggerate the benefits of this approach vs other solutions. Come on, you can do tests in parallel with many tools, and you can avoid captchas for automated testing in various ways. Some of the process points you make are fair but also, again, kind of weak, because sure "QA at the end with no consultation during planning and development" is bad, but that's inherently bad for well-known reasons.
What you've said is basically "our tool is better than every other idea about QA, if those ideas are deliberately implemented in the worst possible way, and our tool is implemented in the best way". Well sure. But also, of course it is.
Sorry if this seems harsh. To give you the benefit of the doubt: Marketing copy walks a fine line when trying to be persuasive and you might not have expected to create this impression. It's also possible that you did some research into weaknesses of automated test frameworks and just don't understand that those things are compensated for quite easily and routinely, because maybe you don't have the background. I don't know, but I hope future materials are a little more grounded in reality.
We have a free trial - you should try it instead of criticizing theory, I would love to get your feedback on the actual product, here or directly (I'm fred@rainforestqa.com).
Regarding DOM interaction, you're missing the point. All automation that tests the front end code, regardless of how it attaches, is using a path that is different from your end user. The end user interacts with the application visually. That's why we test visually. A decrease in code-based brittleness is just a nice side effect. And as you note, this is a very high-level post outlining one key idea about quality ownership. You may be interested in this, which is one of our front end folks talking about why we believe testing visually is superior: https://www.rainforestqa.com/blog/the-downfall-of-dom-and-th...
We have been selling a QA solution for almost 10 years. In that time we've seen thousands of setups and directly worked with hundreds of teams. Your claim "weaknesses of automated test frameworks [...] are compensated for quite easily and routinely" is, quite simply, not true for the majority of engineering teams - few QA leaders, including proponents of Cypress, would agree with you.
> The end user interacts with the application visually. That's why we test visually.
Not all users interact visually. Selecting interactive elements through accessible labels, not based on visual appearance, is a better practice imo. I want important parts of the DOM that are critical to building a correct accessibility tree to be a part of the test, and to fail the test if we change something that makes the accessibility tree incorrect. Because that is the API for assistive technology and it communicates the user interface for lots of people. "Correct behavior" of an app or website includes "the accessibility tree accurately reflects the structure and nature of the content" and "form fields are labelled correctly". I might be in the minority in thinking that, but I believe it 100%.
Nothing I've seen so far (including the post you linked) suggests that the OCR-like approach can tell us anything about the accessibility tree.
The post does make a similar point to mine:
> If your app changes visually, your test will fail. This is of course a tradeoff and a conscious decision to couple your tests to your UI instead of your code, but the upsides bring much more value than that the downsides take away.
I disagree on the tradeoff. I can run screenshot diffs, etc., to catch unexpected UI changes; they are a little noisy, but I'm ok with that tradeoff because I care about more than just the visual appearance of the app.
A "visually correct" app with click handlers on divs and no semantic HTML is a liability (legally, maintenance wise, etc).. I'd like the E2E testing tool to assert that the app is working correctly, which does mean some assertions about the DOM are appropriate to me. I agree with the author of the linked post that "we want a solution that is not brittle in unwanted ways." We can be selective about what DOM things are important.
In the linked post the author says "In particular, DOM tests are tightly coupled to the structure of your code." and gives an example about a Selenium test that uses a brittle xpath that depends on a specific DOM structure.
Maybe I have not been exposed to enough of the industry to know that there are thousands of setups relying on flaky xpaths to target elements for testing. To me, it is not true that DOM tests are tightly coupled to the structure of your code by default. It's a false statement made for marketing purposes and it is gross.
DOM tests "can be flaky", "are sometimes coupled to DOM structure" or whatever, is a fair assertion, but flakiness in DOM-driven testing is not a fact, it's a sign of badly written DOM-based tests. This is often the first thing I address in a code review of new tests written by somebody who does not write lots of FE test code, and they easily learn how to avoid it.
Maybe I'm wrong but it seems really really basic stuff to not create brittle selectors that fail tests for reasons we don't care about.
I like the OS-level interaction and agree that provides some advantages. I totally disagree that these advantages mean your solution wins at the "best way" to test, but it does clearly cover a different surface area than other automated solutions for E2E testing, and it seems like tests are pretty quick to knock out.
This solution could be a complement to other automated E2E tests, and I would see no reason that a PM or other party couldn't spin up and maintain some tests this way as a quick-to-create set of tests to run against various builds, knowing that design changes will break them but that this is OK because in theory it is quick to rewrite them.
But I couldn't see using this tool as the only E2E testing solution as though it is a superset of what Cypress/Selenium/Whatever tests are capable of. It is actually not a competitor with those tools. It addresses different concerns with a little bit of overlap.
I'm happy to check out the free trial and see if I'm missing something, and eat crow if I'm being unfair here.
i don’t agree with this at all. i’ve seen qa automation team members work solely as a resourcing issue: that if you have X developers you always have X feature developers and never enough people to automate. it’s not that devs aren’t capable. i fundamentally disagree with rainforest’s mission that you need “other people” to achieve success in testing. your top engineers have no problem succeeding here
For the first ~15 years of my career, multiple teams, the model was always that engineering was responsible for all the test coverage of all areas. It was wonderful. A separate dedicated QA team will never, almost no matter how good they are, have the same deep insight into the code both big picture and small picture. So they won't be able to exercise it via tests as well as the engineers who wrote it. I always thought, this is the only way that makes sense.
To my surprise in subsequent roles, I've seen many development teams, even good ones, who don't have that deep instinct to break their code via tests. Which was very alien to me. I've seen enough cases now to realize it's somewhat common. For such teams, separate QA engineers are a must.
I still feel having quality responsibility in the engineering team ultimately can offer the best quality outcome, but only if you can build a team with the right attitude about testing. If not, having a separate QA team is vital.
> Asking developers to own QA is broken because developers are naturally biased towards the happy path
This overlooks that "no customer calls with product issues that dev needs to get fixed asap on a Saturday evening" is part of the happy path to optimize for. If those things happen, the dev has deviated from the happy path, including the embarrassment in front of fellow devs "but that's really a case we should have tested". This is another of those details the article is lacking.
I would agree with this. In a past job doing UI work, I had a tester sitting across from me. As soon as I got things working reasonably, I would give her a build to test. It was great, because we found a lot of bugs super early, very shortly after the code was written...
The problem comes from the BigCo approach to people. Such corporations would like to have a QA team that is utilized at 100%.
So that if they don't have much work on project X you just drop them into project Y.
In reality we all know that is just not possible, because people have to have domain knowledge and keep up with changes. It is not like you can come back to a project after 2 months and be proficient with it, because the world has changed.
This way if project X has slow times, dedicated QA will have less to do and BigCo has to eat that "loss".
In a small startup it is easier to have QA incorporated into the dev/product team, because as the software is constantly changing there is work to be done all the time.
Many years ago I asked a developer which was a bigger deal:
- Having an outage
- Missing a deadline
They answered: "oh having an outage is WAY worse". I then asked: "if that's the case, why do you push so hard to hit your deadlines with code you know and I both know is probably not ready?"
They didn't really answer at the time but it eventually dawned on me what's happening:
- odds of being yelled at if you miss a deadline: 100%
- odds of being yelled at due to an outage: unclear as it depends on the odds of an outage in general so let's say "less than 100%"
Therefore, they are "gambling" rationally by pushing for deadlines vs pushing out good code.
The point I'm getting to is that if the goal is "hit the deadline" vs "deploy code that runs in production for two weeks with no errors", QA is going to be less of a priority.
Couple this with the fact that many firms think of QA as "they have to be cheaper than devs!" and compensate them accordingly, and QA people end up losing on both the comp front and the incentives front.
I've seen this happen so many times that I'm not really sure why people are surprised by it any more.
(NOTE: you could say "well then testing should be automated" but you get into a similar argument on who is building and maintaining the testing frameworks).
> - odds of being yelled at if you miss a deadline: 100%
> - odds of being yelled at due to an outage: unclear as it depends on the odds of an outage in general so let's say "less than 100%"
To reframe this, the odds of a dev having to crunch to hit a deadline they're behind on are 100%, but the odds of any developer catching a support escalation or on-call page from an outage are usually way less, especially on larger teams, because it's rare for every dev to always be on the hook for escalations. That's why things like goalies and pager rotations exist; the perception is that saddling one person with the responsibility occasionally is better than splitting it to everyone all the time. One weekend of abject hell a month is better than four weekends of annoyance.
But when any developer can shirk ownership of an outage, they all effectively do. Even from a support perspective, that doesn't even make me mad — who wouldn't want to sleep in, ignore Slack on weekends, and not feel dread every single time your phone pings with a notification?
On the other hand, teams _never_ let developers off the hook when there's a deadline that might slip. If you don't have something to do, you're pairing off to help someone else who does; and if you can't, you're more likely to be working on the next thing down the pipe (so there's not as much deadline pressure) than supporting stuff (like tests! and docs!) that won't be considered tech debt until someone (probably support!) hits something related to it and calls it out later.
Dedicated QA doesn't lift the outage ownership problem, it helps mitigate it before it happens. But QA teams that deflect outages struggle to provide data-driven reasons for their existence, because they can't track a negative, and credit for _n_ 9s of uptime is always split multiple ways such that nobody gets a big piece. QA winds up forever underappreciated because their wins are so much harder to count, but the times QA causes a deadline to slip are _always always always_ flagged.
Nevermind that outage response pulls engineering resources off hitting deadlines... so that becomes a self-perpetuating cycle...
The best route is to never have deadlines. Just convince sales and marketing of this and you're golden. /s
That (developers only being responsible for a fraction of bad rollouts they personally cause) reminds me of the water dynamic at a lot of apartments I've rented:
For "reasons" the water meters are per building rather than per unit, but the landlords are adamant that residents have to pay their fair share to ensure water isn't wasted. The scheme envisioned to meet those goals is that the total cost of water for a building is averaged out across all units (perhaps normalized by unit size). Looking at the net effect however, using $X of water only costs $X/N because my personal excess is split between the rest of the residents. Consequently, the entire building uses and pays for substantially more water than they would if the meters were more finely distributed.
This is a great description of the dysfunction in development work management. I've always said that deadlines are just made up numbers and the work will get done when it's done. However after leading a team I see how that attitude can lead to a ton of bike shedding instead of prioritizing and shipping features.
This is the classic case of development velocity pitted against operational stability. Entirely different incentives, and when vested into the same role, that role is bound to prioritize one incentive over the other. For this reason I think they must be separated at least by body if not by team. They certainly need to be separated by different managers.
I tend to think QA is perhaps well situated alongside Ops("DevOps"), and very close to Product + Design.
We were a small and good team developing hardware/software combos.
We ruined a demo to a client by promising some hard-to-do feature, and during the demo, said feature had not been well tested for a particular environment.
The debriefing of that failure was memorable. The big boss was yelling at us, saying we should do better, work harder, longer, whatever was required to succeed.
When this calmed down, I only asked one question: when going back to my desk, should I work on this new feature promised to some other customer, or test this old one for any combination of inputs/environments?
The response was: "You do both"
I insisted that I would do both, but which one first?
He responded with some blabla I do not remember, but no response to my question.
To any manager who cannot decide between feature and stability:
If you cannot prioritize, the dev will do it, with whatever information/incentive they have. You may not be happy with the result.
But that's just bad management. Most of the time it's better to drop/delay some features than to compromise stability. If the manager doesn't get that, he/she is the problem. And not the software developer.
OTOH, a manager doesn't see outages - they give you an assignment and expect you, the developer, to do a good job. Outages are not because you missed a deadline, but because you didn't do the job that is specified in your job description.
I'm sure you can make analogies. If you get a new kitchen installed, you expect it to be done properly, if it's finished within the day or so they quoted for you but the doors fall off, you won't be happy. You never SPECIFIED that the doors should be firmly attached - you assume they will be, because you trust in the competency of the people installing it.
That's the bad management I'm referring to. I've seen a lot of bad management, but never actually that bad.
If software fails it's a team failure. QA is equally responsible for that, not just developers. Quite often the root of the failure is an ambiguous specification (so whoever wrote that is responsible too).
And in the end the manager is also responsible, because he didn't do his job right (picking suitable people and coaching them, so they can do the job).
Why can’t the QA people just live on the dev team? Why do they need to be siloed away or not exist?
I had this in my past job. Have 1 QA per two developers, the QA sits with those two developers, and you are constantly telling them when a build is done and on staging. They write tests, do some manual checks, and then tell you how it is going relatively immediately. They also handle the problem of reproducing bugs and narrowing down exactly where they occur, which is not trivial.
For all the faults in that organization (including letting our test runner be broken for months), we didn’t put out a lot of bugs and we found all manner of strange edge cases very quickly.
The key part in that is communication. That makes all the difference, whether it be with BAs, QAs, or the end user. Speeds up the development cycle and greatly reduces the number of bugs.
The best experience I had was on a team that had essentially 2 BAs and 7 devs. There was constant communication to clarify actual requirements, devs would build automated tests off them, BAs would test against the requirements, and then a business user would do a final look over. All in all, features were able to be released usually within a day and there would be days we’d get out 3 or 4 releases. Only in one case did a major bug get released to production, and the root cause of that was poorly worded regulations which had a clarifying errata come out at the 11th hour.
For as many faults as that company had that caused me to move on I’ve yet to run across a team that functioned as well as that one did.
Communication is great until someone becomes unreasonable and doesn't want to do something. Trust but the chain-of-command must verify. Shouldn't need to, but it should be there as insurance.
People don’t just randomly become unreasonable halfway through. If they’d be unreasonable, they’d do so from the start. If it happens midway, there’s almost always some reason.
That said, I do agree that the chain of command should always be aware of what’s going on, or have a reliable way to find out.
Yep. People aren't rational, compliant, cooperative actors 24/7. Agendas. Bad days. Illnesses. Family events. So then it's unreasonable and foolish to extend trust unconditionally, perpetually, and without auditing.
If all people were angels, no government (or whoever regulates) would be necessary.
If all people (and systems) were perfect, no backups would be needed.
I’ve experienced both, where the dev team also had to do QA, and a dev team with QA people.
I personally hated writing automated tests, but it did give me a much deeper understanding of the product. Especially when writing tests for features I wasn’t working on.
That said, having a dedicated QA person within the team is far more effective. The dev team can build features, while our QA person works with the business to come up with test cases and test data.
We had that in one of my assignments. It kinda works, but at the same time, they were a gatekeeper, a barrier, and usually a release was delayed for a day while they tested out their new testing approach and went through their Excel sheets of things to test for.
I think the ideal team setup is one you allude to: a cross-functional team with experts from each domain collaborating together. My intent was not to say that can't happen - but that it typically doesn't, because of specialization and organizational politics. The vast majority of software teams have a siloed QA team. Why? That's for another thread :)
Makes a lot of sense to me. The best and highest quality software I've ever worked on (people could die if it failed) had the devs doing testing and writing automated tests, with an intermingled QA team doing further testing, and often jumping in on dev.
QA here, disagree. In this situation QA is focused on the quality of the new features, so they're already excellent when they go out into the current release.
We do that too. Tests for new features also include regression testing, as do integration tests and automated CI/CD tests.
In our working agile environments, you know the team's velocity. Managers know the team's velocity. Points per story include QA work. If they want something faster in this sprint, something else has to be taken out. Firm but fair, and it works very well.
That's false. QA can also be concerned about growing new features in the next release. We have embedded QAs on our teams. They QA new features, as well as help to triage current bugs.
Anecdotal too, but I've been QA lead on a team for a 9 figure project in aviation, concerns and complexity very significant, and it was very successful.
I work at a startup so we have a small team. But for us it goes:
1. Develop some feature (me and the other engineer are full stack)
2. Send to other engineer for code review
3. Deploy to a staging site, with test data. Send to product girl to QA
If she finds a bug (or realizes her design doesn’t look as right in practice as she envisioned) she sends back with notes, we implement or fix them, and then start back at 1-2.
I think QA is different for every org but I believe there should be steps. At bigger orgs there should be QA for every feature developed (so each team should have a QA person) and then an integration QA of the current beta.
QA is different for every type of engineering organization.
It shouldn't be automatically thrown around or diffused as something tertiary that can be subordinate wherever it lands. That works in a 3 person startup, but not if you're building an aircraft carrier or an iPhone.
I agree in theory and there are certainly teams where this could happen, but what happened on this team was that the bugs were found so quickly that the fixes were also quick (I would always get a bug report the same day and often within the hour), making shipping stuff with few bugs relatively painless.
Was there pressure to keep releasing? Yes. But with the rapid feedback, it never became too onerous to get them done anyway.
> Why can’t the QA people just live on the dev team?
Because there is little incentive for devs to police themselves and there could be multiple dev teams spanning client/server that needs to be integrated and tested.
A slightly better org to own QA would be product team.
Sure there are incentives for devs to police themselves. I’d rather catch problems early than have to deal with a shitstorm from pissed off product owners and VPs when bugs are found in production. But our QA person is on another team. I’d much rather have them on my team.
Each team should have some QA resources. Server can be tested independent of client, etc. Then the QA staff work together to ensure appropriate systems integration tests.
Siloed QA team runs the risk of becoming a bottleneck as work from disparate teams comes through.
I've been a QA Automation Engineer for 15 years. This is the best thread ever on HN! I've read every single comment.
I've found the best approach is for QAs to be embedded right into the product team - where the product team manager is the same person for both Dev and QA. QA and Dev all review each other's code. QA reviews the Dev unit tests, and Devs review the QA integration tests (API and UI). QA may not be able to provide in-depth reviews on dev code, but at least they can double-check that unit test coverage is adequate.
As for UI automation, one point of friction that often happens is that QAs want unique IDs on each UI element. If the devs fail to provide those unique IDs, then QAs are often forced to do some convoluted workaround. However, there is a better option where QAs simply add the unique IDs themselves and request a code review from the dev. This works well because the change to developer code is benign and QA is not slowed down.
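Concretely, the change QA sends for review is usually a one-attribute diff - something like this hypothetical React component and the test that targets it:

```tsx
// Hypothetical component: QA's entire change is the data-testid attribute,
// so the dev review is trivial and the test no longer needs a convoluted selector.
import * as React from 'react';

export function SyncSpinner({ visible }: { visible: boolean }) {
  if (!visible) return null;
  return <div className="sp sp--lg" data-testid="sync-spinner" aria-label="Syncing" />;
}

// On the test side, the workaround becomes a one-liner:
//   cy.get('[data-testid="sync-spinner"]').should('not.exist');
```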
Thank you for your comment (useful and interesting information).
So it looks like your suggestion is to have 2 types of engineers in the team, where at least one of them is focused on QA + encourage bi-directional code reviews.
Could you tell a bit about the involvement of PMs in the QA process from your experience? What's working and what isn't?
The PM is busy doing their own job (writing new stories, managing old stories, talking to clients, etc). They typically only do a superficial amount of testing on new features (and usually only on their pet-features). The vast majority of testing comes from QA and Dev.
Having QAs that technical would be amazing. Most of the time I find that anything more complex than basic JSON editing is the limit of their technical abilities.
Another QA/Automation engineer here - keep looking, we definitely exist! I was a dev but prefer QA, and still enjoy doing code reviews for the devs as well. Also helps to identify missing unit tests and consider what tests I'll be building if I've missed some too.
I've yet to see the automated test suite that replaces a skilled, sapient, human, functional tester. The automation takes away the drudgery of repeating tests, but it takes a skilled human to figure out what risks are in the code and figure out coverage to determine whether those risks are realized. If you have developers write good unit and integration tests, and build their work to their local environment to make sure it basically works, you avoid most of the "throw it over the wall" mentality. You also deliver way fewer needles in the haystack, if you will. Testers are free to look for more impactful bugs manually, and then to automate coverage as appropriate.
Writing code is generally easy. Figuring out what code to write is hard. I can spend 80+% of my day thinking about an overall system, what things are important, and how to design things so that they both work and don't prevent change later. Then sitting down and writing the code to do that is (usually) the easy part.
Writing automated tests is generally easy. Figuring out what tests to write is hard. Does the code break on normal cases? Does it break on edge cases? Does it handle all the known use cases? Does the implementation actually achieve what the client is asking for? Does what the client asked for actually make sense?
You can't just automate away testing. Automated testing (unit, functional, system, etc) is a power multiplier for both dev and QA... not a total solution.
Business thinks that you can do test automation in a vacuum without any domain knowledge and without knowing the system.
You always need people to understand the system, to understand the automation and to keep domain knowledge in house.
If your whole QA team leaves and you have automation (or even documentation), you are still in deep shit, because hiring new people and training them on the system will still be needed, and only then can they learn to run the automation. They still need to spend time reading documentation and understanding it, because the fact that something is written down or documented does not mean another person has the same understanding of it and the same reference framework to use that documentation.
Well you can hire some new people and let them run automation built by previous team but ... good luck with that.
I think you're pretty on point here; today human nuance is needed to decide WHAT to test as well as HOW MUCH to test. It's also useful for executing some types of tests too (which is why we support both automation and human testing). IMHO unit and integration tests should be a given. Human testers should be used with the highest leverage where possible. Being a functional tester today, though, is partly limited by what tooling is accessible to you - which we want to fix.
I've seen them used to do a decent smoke test to allow the QA people to make forward progress on other testing. I worked at a place that had a large number of computers running some testing software (robo something?) that ran through all the mainline scenarios. They constantly added new scenarios so the QA folks were working on much deeper flows until those too became automated.
Totally agree that automation will never fully replace the value of human-powered testing. (Though it is great for the rote regression-testing stuff. The "drudgery", as you put it.)
Isn't the problem with relying too much on unit and integration tests that they don't consider the end-to-end app experience? (Which, in my mind, is what ultimately matters when customers think of "quality".)
IMHO, yep - it's a balance, but the great thing is they can be quick to run and easy to run locally, which is great for developers to get fast feedback. Unit testing is unlikely to catch all end-user-level issues though; traditional automation too, which is why human testing is still valuable today.
The company from the article is not claiming that, but all the business people are talking about it.
They would like "test automation" to mean that instead of 2 QA engineers you now need 1 doing 2x more.
The sad reality is that with "test automation" you actually still need those 2 QA engineers and maybe a part-time dev to help them out. The upside would be that those QA people would spend less time repeating clicks and could improve the quality of their checks.
In a model where developers adhere to a devops philosophy and are product owners, there's no need for a split. Developers should not be measured only on quantity of releases, but also on metrics related to availability, customer impact, latency, operational costs, etc.
I'm not opposed to a model where non-technical roles are empowered to define test cases and contribute to approval workflows of a release pipeline, but that doesn't absolve developers of the primary responsibility of being end-to-end owners.
I know "devops" is an overloaded term that means different things to different people. To me, it's when developers are involved in product research, requirements gathering, user studies, design, architecture, implementation, testing, release management, and ongoing operations. They're supported by a dedicated product manager who leads the research/requirements gathering/user studies initiatives.
This is how I've worked for most of the last decade. Empowered devs that are accountable for their releases and supported by metrics related to customer experience want good quality releases. Ownership works.
Agree, but a lot of companies, particularly larger ones, don't have this sort of culture. They are filled with people whose sense of worth (at the company at least) is defined by a narrow job title and set of skills. Their worldview requires them to categorize everyone else on a narrow set of attributes as well:
"Oh, you're a programmer so you must not be good at soft-skills and communicating with the business."
"Ah, you have soft-skills and you also program? You must not actually be very good at programming so you should stop that and just become a PM."
"Ah, they are PMs/Sales/Whatever so SQL is too hard for them and they can't be expected to learn git or markdown; that's for nerds."
This leads to cringy, insulting sentiments like:
> The most valuable thing you can be doing is writing code
There's a little bit of an insane assumption in this pitch that PMs should be writing tests because they are the ones that "care about quality."
With my engineering hat on, I hate this idea because the message that devs don't care about quality (but PMs do) is not how I want to operate.
With my PM hat on, I hate this because having the PM do test automation has got to be the lowest value activity you can ask the PM to do. That is, whatever the PM doesn't get to do because he's writing tests is almost certainly more valuable.
So yeah this pitch is nuts. If your system makes it easy to define tests with no code, awesome and that should be useful by the teams that do testing today. Weird that it isn't their angle.
A problem with having engineers own the tests is that if the engineer misunderstands how the feature is supposed to work, then both the code and tests will be wrong, but because they are consistently wrong the tests will be green. So the product owner still needs to do ad hoc testing to verify that things work as expected.
I like the “three amigos” approach from BDD as a solution here.
The product owner writes the initial acceptance criteria, ie specifies the product. Then sits down with an engineer and tester to edit the ACs into sensible Gherkin so they can be executed as tests.
The tester goes off and builds the tests (implementing the Gherkin steps) and the developer builds the feature. At the end you bring them together and if the tests pass, you are quite confident that you met the requirements. (If the error is not obvious you all huddle and figure out where the misunderstanding was).
The important thing here is that everyone gets together and makes sure they share an understanding of the feature. You could skip the Gherkin and write the tests some other way. But having the product owner think through the process of specifying the acceptance criteria cleanly is a good exercise, I think. Now I don’t think that means they need to own the tests… but there is a case for it not being the engineer that’s implementing the feature, for sure.
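For anyone unfamiliar with the mechanics, a toy sketch (the acceptance criterion, step text, and logic are invented; cucumber-js with TypeScript step definitions):

```ts
// steps/shipping.steps.ts - step definitions for an AC the three amigos agreed on:
//   Scenario: Free shipping over the threshold
//     Given a cart totalling $55
//     When I proceed to checkout
//     Then I am not charged for shipping
import { Given, When, Then } from '@cucumber/cucumber';
import assert from 'node:assert';

let cart: { total: number; shipping?: number };

Given('a cart totalling ${int}', (total: number) => {
  cart = { total };
});

When('I proceed to checkout', () => {
  // stand-in for driving the real product; the tester implements this against the app
  cart.shipping = cart.total >= 50 ? 0 : 5;
});

Then('I am not charged for shipping', () => {
  assert.strictEqual(cart.shipping, 0);
});
```

The Gherkin stays readable to the product owner, while the step definitions are where the tester does the real work of driving the app.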
> The important thing here is that everyone gets together and makes sure they share an understanding of the feature
So true. Communication is everything, but because it's hard it gets short circuited everywhere. Where I work, devs write their own tests. It's as terrible as you can imagine while still having a successful product. I once suggested we split to a designer/verifier model...and got the weirdest blank stares. I honestly think it's because no one I work with wants to talk to each other.
Devs doing QA worked fine for us. You need to have a team that actually cares about the product, and you need figurehead devs - not necessarily seniors or team leads, but charismatic people others will fall into line with - who model the behaviours you want.
The problem with almost anything else is that it increases the number of hand-offs between groups of differently-aligned people on the route to production. If you're aiming at multiple deployments per day, with sub-hour commit-to-prod times, the latency in those hand-offs adds up way too fast.
My main problem with devs doing QA is that QA is huge; knowing the entire product is hard.
In my case, devs were just testing what they changed and never noticed something horribly broken somewhere else.
I think all devs should test that what they built works in production (you don't need QA for that), you need QA to keep everything else tested and working.
> devs were just testing what they changed and never noticed something horribly broken somewhere else.
There's your problem. If they broke it but didn't test what they broke, then they changed it but didn't realise they had. "Horribly broken elsewhere" sounds like the sort of thing that should show up in an automated test, no?
If the problem is "we don't have a good enough automated test suite and our architecture doesn't limit the blast radius of changes" then I can see how throwing QA headcount at it might look tempting.
A person doing manual testing, which is a gate to deployment would certainly be hard to make work for deploys as fast as you're targeting. It may still be valuable to have someone separate to the devs testing your product (in production, not as a gate to release) looking for things your automated testing, customer feedback or metrics may have missed. Whether this would be valuable is really context-dependent.
Everywhere I have worked so far as a software developer, we had the QA department very close. In the same room or next door. They were there from the beginning until the end and tested our software continuously.
If we developed something that was hard for them to test or to understand, they gave us a hard time. So we tried to make the software testable. Do meaningful logging. Make things repeatable. Develop tools to export state/parts of the database that can be restored, so you can actually test one feature and don't have to do 100 steps every time you need to test Feature X.
All that also greatly helped the developers, because a lot of companies develop software that the developers can't even try out, because it only runs in a QA system they don't have access to. That's horrible.
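A rough sketch of what that kind of tooling can look like from the test's point of view (the task name and fixture are invented):

```ts
// cypress/e2e/invoices.cy.ts - jump straight to the state under test instead of
// clicking through 100 setup steps each time
it('lets support mark a pending invoice as paid', () => {
  // 'db:seed' is a hypothetical task registered in cypress.config.ts that restores
  // an exported slice of database state
  cy.task('db:seed', 'customer-with-pending-invoice');

  cy.visit('/invoices');
  cy.contains('tr', 'Pending').within(() => {
    cy.contains('button', 'Mark as paid').click();
  });
  cy.contains('tr', 'Paid').should('exist');
});
```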
This has been my experience as well. The places where testing was an afterthought meant the software was not easily testable. Executing multiple steps to test one feature meant QA folks kept running into one service issue or another, causing delays in bug discovery. Rather than giving devs time to fix this, the management put more devs on test writing and was more interested in looking at dashboards.
I've seen this behaviour in outsourced software development. In the end they had 10 times the people they would actually need on projects. And they were mostly trapped inside deadlocks and doing nothing.
Just like if you try to parallelize a single threaded application without redesigning it. Just multiplying the core count of the CPU by 10 may bring you a few percent increase in speed, but nowhere close to the expected +1000%.
Sorry to sound harsh, but IME I find it - to be kind - naïve to say that developers are incentivized for release speed while at the same time pretending that product managers/owners WILL NOT have the same exact incentive. I mean, I don't know in which dreamland company you are working, but in the average company the ones pushing for new features and not caring about code quality are... PMs.
I know they have to sell their no-code product which is not targeted at the QA teams but this is too much...
In my experience (and I have a great deal of that), true product Quality is dependent upon an endemic cultural philosophy of an organization; not a single tool or technique. It's a shared discipline, and a cultural imperative.
It's like those ads for exercise machines, where professional athletic models, who train for five hours a day, and drink broccoli smoothies for lunch, are shown using a machine for a couple of minutes, with the inference that the machine is the reason for their washboard abs.
The Quality formula is hundreds; if not thousands, of years old. Software development is just another context.
There are significant costs to doing real Quality, which is the main reason that more people don't do it.
One of the biggest costs, is that there isn't really that much money to be made, doing superb Quality. You get to sell to snooty customers, and feel smug, but there just ain't that many people that are willing to pay the premium for top Quality.
Dreck sells. People who get truly rich, seldom do so, selling Quality. The ones that do, have that endemic culture. It's really difficult (and expensive) to maintain that; especially at scale. Lots of humbling and hiring expensive, cranky old craftsmen that can be difficult to work with.
You just need good enough quality and prevent total catastrophes that would drive customers away.
A lot of the time it is about making sure that stuff just works and not about testing edge cases - in my current company we did not really care about edge cases and I don't remember when we had an issue because of something like that. Mostly issues are popping up because people don't understand how things work or how they should work.
People not understanding how things work leads to the "exercise machine" metaphor, because keeping everyone's knowledge up to date is hard work that needs to be done every working day, and it is hard work that costs a lot of money. For this, "test-automation" and "low-code" solutions promise that you can put your domain knowledge into such an "exercise machine" and keep it regardless of whether you keep people or let them go, and whether they spend time learning about the system or not. Which is a false premise, just like an exercise machine giving you a six-pack without much work.
First off, we totally agree on this: "true product Quality is dependent upon an endemic cultural philosophy of an organization."
What we're asking you to consider is how much of that is because the tooling is shutting out the roles who are naturally incentivized to care about product quality?
I think your broader thesis, while amusing, is ignoring some of the biggest and most valuable brands ever created... Apple, Rolex, Mercedes...
> I think your broader thesis, while amusing, is ignoring some of the biggest and most valuable brands ever created... Apple, Rolex, Mercedes...
Hey, thanks for the condescension. I thought we didn't do things like that on HN, but I am often wrong.
I actually worked for one of them "biggest and most valuable brands" for a couple of decades, so there's a good chance that my "thesis" might have legs.
Look, I actually agree with a lot of what you wrote, but I wasn't deliberately calling your baby ugly, and don't really appreciate the reaction. I did not mean to attack you, and am actually sorry that my comment was perceived as such. If I could delete the first line of my comment, I would. It's accurate, but I can see it as being inflammatory.
[EDIT] I'll see if @dang, or someone, can delete it.
Thanks @dang. It's gone now. I 100% stand behind the remaining text. If y'all dig around in my HN handle (which might have been helpful before taking a swipe at me), you might be able to figure out which company I worked for, and why this actually gives my "thesis" some level of veracity.
> Every minute spent doing QA is a minute not spent writing code
This is flawed reasoning. 80% of every engineering discipline is double-checking your results. Bridges, space shuttles, heck even the chips. Only in software do we say that the 20% is 95%.
I think product owners should do (manual) QA. On their own, and when necessary for scalability reasons also via a small team under their direct control.
Yeah, this is the only thing that practically worked, in a complex product I used to work on.
- If you ask the devs to do QA, you'll get no bugs other than the ones they already caught during testing and deployment.
- If you have a mostly independent QA team, they will find somewhat silly/trivial bugs like the login page not working in an extreme edge case scenario.
- However, when you ask your Product team to own QA, you get the real good stuff - "Why does this feature not actually work well with this other feature when you combine the configuration" etc. It's great!
On independent QA teams finding trivial bugs, I think this is a social problem. Specifically an alignment one. If a bug goes out, whose fault is it? Eng for making it or QA for not finding it? The answers are different at different orgs, and they have more to do with power dynamics than anything else.
QA is pretty easy to fake (for a while). The last thing you want to do is quality check your QA team. So the further the incentives of the QA team are from the success of the product, the worse things get. It's a spectrum from the other cofounder doing QA to someone in another country charging by the hour.
There are opposing forces to this. It's inherently difficult for people to check themselves. Also, QA is its own skill. It's one more thing to ask devs to be great at. Maybe it's worth having some specific experts around. It's kind of a specific mindset that not all devs have.
This mindset issue is where I'm not entirely sold on what looks like a new strategy from Rainforest QA (where I used to work) of strongly targeting product teams. I haven't seen any QA tool so good that it removes the complete need for skill like indoor heating means you don't need to know anything about how to tend a fire. The best ones are more like how a modern gas range helps a chef. So I question how great the results will be if you have a PM or CSM doing it. Still, I wish them the best of luck.
Thanks for the luck Travis! Def something we've been edging around for a while. From what we've been seeing, a lot of PMs and no/low code folks already do this kind of thing manually, and don't have tooling for it. RF automation is now way better / easier to use than when you were with us (imho, of course).
We had a very large QA silo in a previous company. ~150 people all working under a former QA (non-developer) person turned into a "QA VP". It was a disaster in so many ways. Empire-building tendencies, large numbers of bad hires, etc.
The solution was to break up this silo and get these people into the product teams, where more technical people could handle management and hiring.
When I was a PM, I was lucky to have a big, talented QA team, but I still knew I'd have to do a "smoke test" myself after every major feature release. I cared the most, and I knew the most about the intricacies of the product.
Also: bliss is when you, as a product owner, reach the point where you trust the QA team you've worked with so closely that you only have to run some basic tests a few hours after the release.
> - If you have a mostly independent QA team, they will find somewhat silly/trivial bugs like the login page not working in an extreme edge case scenario.
Fair, but you have to keep in mind that part of QA's job is to explore the silly and extreme edge cases too. One of the nastiest bugs I ever found started as a joke of a test: what happens if you paste a massively long text string (I think it was something like 600 kilobytes) into the password field and submit it?
I expected the UI to barf and a bit of mischief. What I got was the entire server backend crashing and losing every user session when it restarted. The joke test uncovered a serious denial-of-service attack vector that was trivial to execute, trivial to automate, difficult to detect, and incredibly effective.
I agree it would be silly if QA insisted that trivial or edge-case bugs be fixed; every bug has to be judged on the cost of fixing vs. the risk of it happening in the field. But finding those silly edge-case bugs is still very well worth the time to explore - not only because chasing the edge can reveal serious problems, but also because it helps define where the limits of sanity really are, instead of where they are imagined to be.
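These days I'd even fold that kind of joke probe into an automated check. A minimal sketch of the idea (Node 18+ global fetch and Jest assumed; the endpoint and field names are made up):

```ts
// Hedged sketch: probe the login endpoint with an absurdly long password and
// assert the server rejects it with a client error rather than falling over.
test("oversized password is rejected, not crashed on", async () => {
  const hugePassword = "A".repeat(600 * 1024); // ~600 KB, as in the anecdote

  const res = await fetch("https://staging.example.com/api/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ user: "qa@example.com", password: hugePassword }),
  });

  // Expect a 4xx (e.g. 400 or 413), not a timeout or a 5xx from a dead backend.
  expect(res.status).toBeGreaterThanOrEqual(400);
  expect(res.status).toBeLessThan(500);
});
```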
>However, when you ask your Product team to own QA, you get the real good stuff - "Why does this feature not actually work well with this other feature when you combine the configuration" etc. It's great!
This works right up until you hit a complexity point product can't handle, or worse, you find out that product is building something completely different from the requirements the SMEs are giving.
Your QA group (or misguided product group) can do jack squat about badly translated reqs or the right questions not being asked, which in the presence of the best devs and QAs degenerates into building the wrong thing perfectly, which then has to get redone over and over again. A good QA who's been around the block is usually pretty good at sussing out major communication defects, but it can be a tough spiral to pull out of.
It's a surreal class of miscommunication to behold, and as close a death spiral as you can end up in, because everybody day in day out is working hard and getting frustrated but no one's getting closer to the end goal.
> However, when you ask your Product team to own QA, you get the real good stuff - "Why does this feature not actually work well with this other feature when you combine the configuration" etc. It's great!
That would require a skilled product management that is on equal footing with sales and has a veto right before stuff gets sold to customers. All too often, however, the only thing that matters is customer wishes for new features, with no one making sure that all the directions the customers are tugging in don't rip the product apart.
Having previously spent a decade in that role for a $1B company: I never got a veto right before stuff got sold to customers. I really, really wanted that in the beginning.
In the end what I ended up doing was spending a lot of time packaging the product as well as educating the sales force. If you don't have well-packaged products, the sales force will sell anything, even if it doesn't exist. And they'll get backed up by a typically slightly disconnected exec team, because they also want revenue really, really badly.
This strategy worked surprisingly well. I think both the sales team and the ever-so-slightly disconnected exec team benefited from the artificial structure that I spent so much effort making up. I guess people instinctively prefer following clear structures over chaos.
(This was 5-10 years ago, and not in Silicon Valley.)
+1 - I think they should be able to, for sure - with current tooling, this is basically impossible without doing it manually, which I've seen a lot of product folks do!
In my experience what's worked the best is making the team that own the product (or feature or what have you) cross functional and let everyone own everything.
Of course everyone brings different skillsets, and of course everyone falls into their natural domains accordingly, but it really shouldn't be that Bob and Alice have exclusive domain and responsibility over this, Jane over that, and John over something else. It's a product and a team for a reason.
this has been our experience as well - sadly, however, most (all?) teams specialize and silo as they scale, and so you don't tend to see this setup beyond the very early stage
I think this article misses two important points on siloed qa teams
1. Qa doesn’t know what can be covered by unit or integration tests
2. Since they treat our code like a black box, they may create permutations of tests which cover the same functionality
Maybe this is part of the draw of having a qa team. Feature coverage rather than code coverage. The downside is this can create a huge number of expensive to run manual tests which may be hitting the same code paths in functionally identical ways.
The tooling for automating manual tests of web apps is almost there: puppeteer, recording user inputs and network calls, replaying everything and diffing screenshots.
Since QA tests are tied to features and not code, there's also the problem of having to run all QA tests even if you're releasing minor code changes. My build tools are smart enough to return cached results for unit tests whose dependencies didn't change, but there's no equivalent for QA tests.
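For what it's worth, the record/replay/diff loop above takes surprisingly little code. A rough sketch using Puppeteer, pngjs, and pixelmatch (real libraries, but the URL, selectors, and baseline path are made up):

```ts
import puppeteer from "puppeteer";
import { PNG } from "pngjs";
import pixelmatch from "pixelmatch";
import * as fs from "fs";

// Replay a recorded user journey and count the pixels that differ from a
// previously approved baseline screenshot.
async function checkoutFlowDiff(): Promise<number> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 800 });

  await page.goto("https://staging.example.com/login");
  await page.type("#email", "qa@example.com");
  await page.type("#password", "hunter2");
  await page.click("button[type=submit]");
  await page.waitForSelector("#checkout");

  const shot = PNG.sync.read(Buffer.from(await page.screenshot()));
  await browser.close();

  const baseline = PNG.sync.read(fs.readFileSync("baselines/checkout.png"));
  const diff = new PNG({ width: baseline.width, height: baseline.height });
  return pixelmatch(shot.data, baseline.data, diff.data,
                    baseline.width, baseline.height, { threshold: 0.1 });
}
```

The hard parts are exactly the ones you mention: knowing which of these to run for a given change, and keeping baselines from crying wolf on every 2px layout tweak.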
Yeah, this article is shallow and avoiding the deficiencies inherent to Rainforest's offering. They are defining QA challenges as a nail so they can sell you their hammer.
Devs sometimes aren't the best QA because they think like developers and not like end users. The mindset prevents you from doing the things at the edges that end users do. It's like your instincts kick in and keep you safe without the railing, where an end user might plow ahead thinking their path is OK and fall off the edge.
Devs should do automated unit and functional tests, but after that, get some good QA that do not have the same boss (at least at the first level) as the developers.
This can absolutely be learned, enabled, and encouraged by the right guidance and, if necessary, training at the dev-team level. It's just part of the job.
Can be, yes. Devs aren't always the best group to do that with though.
I've had quite a bit of user training in a product I work on, but unsurprisingly the QA group of people who in their primary roles use it daily and have degrees in the problem domain are way more effective at finding non-obvious problems.
I really don't think any Dev team will be as good at QA as a dedicated QA team. The mindset is very different and the context switch is hard. Checking your own work is also not the greatest of ideas.
I've been on teams where the "siloed" QA model seemed to work pretty well -- we seemed to find a decent balance between test coverage and frequency of releases.
But this was at a cash-rich startup that had lots of money to spend on making sure we had plenty of QA headcount. That seems to be the exception, rather than the rule. Lots of startups I talk to are quite constrained in terms of dedicated QA, so the argument for empowering product managers to own quality does make some sense to me.
The last place I worked would pull devs randomly to do QA tasks when QA got behind. Of course, since we weren't trained in the QA processes, and didn't do it all the time, it often wound up taking longer to do the testing, plus the devs would get behind in their tasks.
It's also common practice; usually caused by testing getting under-resourced, bad tooling, or longer shipping cycles (more stuff to test, more pressure to get it out).
The complexity of our application necessitates that our business owners do a lot of the final integration testing.
It also requires that our product owners handle a large part of the implementation & QA efforts.
For our application, a single piece of code might be reused 40 different ways across 100 different customers based on configuration-time items.
To ask a developer to figure out why something is broken would be a fool's errand for us. Only with the aid of trace files & configuration references are we able to perform RCA and clean up an issue from live usage. For us, 99% of the time it is a configuration thing (either on our end or some 3rd party system). If the code works for 99 clients and 39 other configurations, it might be your client+config that is wrong, not the code.
Essentially yes. We are trying to move to a hybrid model where we can send Excel workbooks to the customer for completion, and then directly import these into the system. This would get us ~80% of the way there.
One huge upside with configuration is that it can be copied really easily. If you have a model customer that is very similar to others, you can use it as a starting point and save 99% of the work effort.
Makes sense. Did y'all try to automate any of it? I've seen situations where things are highly-configurable, and massive at the same time - mostly in the medical industry. Test plans are hard due to interactions between config being highly-complex. No sense testing all of the setups, as not all are used or would even make sense.
The right person for doing QA for code written by developer A is developer B, who is motivated to show that there is breakage in the code written by A. This is just good old peer review, in the sphere of development.
The industry uses dedicated QA people based on the assumption that you can get two for the price of a developer.
In fact, if you have two developers, one of whom is more clever than the other, you want to give the code cranking to the less clever one, and use the clever one to verify the code cranking and make improvement suggestions.
If you have clever people designing the system, and clever people finding problems in the commits, you can just have average coders cranking out the bulk of it. Most code doesn't need cleverness, just persistence and consistency.
My org has mixed teams. We find devs are terrible at being QAs. This is not through lack of willingness but they're always either overtesting or undertesting and usually missing the test boundaries. So we use the SDET model, software developers trained in testing to build out the automation and high quality QAs with lots of domain knowledge to do the static and exploratory testing.
In my experience (hiring manager for QAs), there's no way I can get SDETs for anything like the price of a journeyman dev. I'm probably looking at a 20-40% premium right now for someone decent.
I agree with not asking the developers of the feature. I don't think it should be part of product. QA is a deeply technical job. At GitLab, QA gets its own department called Quality. It is on the same level as development, design, and infrastructure. The people in it are mostly software engineers in test. For more information please see https://about.gitlab.com/handbook/engineering/quality/
Some of QA is needlessly technical; we think that's a common issue with tooling today. QA today just isn't accessible to the folks who think about and manage change to the product, namely product folks.
Why don't you think it should be part of product?
Love that your department is called "Quality" - it implies something broader and less metrics-driven, even before reading the doc.
> folks that think about and manage change to the product, namely product folks
It's weird to me that the software engineers building the product are not viewed as "product folk".
The people designing the architecture for and building the actual product aren't managing the changes to it or concerned about it? The people who have to address any defects in it aren't concerned about their processes as they pertain to quality?
This sort of worldview seems anti-agile and anti-lean TBH.
Engineering needs to own and focus on quality. It's construction that determines quality. It's construction that addresses defects. It's engineering that owns construction.
If engineers aren't owning and focused on quality the business either doesn't really believe there are quality issues or there is a huge alignment issue.
Thanks Russell. In our experience a lot of QA improvements require automation which is more of a software engineering job than a product management one.
I think this is mostly true today, but something which we think of as broken and we're fixing. We believe it should be accessible to a wider audience outside of just engineering; there is a lot to do, but we're starting with UI automation.
Being part of a medium-size company I realise the reason to not have QA is to save money. When an outage occurs because "oops, we didn't catch that bug", the issue is fixed within minutes. I suspect this is fine for the company and I imagine that the cost of these outages is cheaper than the cost of dedicated QA teams. Besides, this "culture" of "developers should own their products from conception to deployment, including QA" is becoming more ingrained in all new companies. It almost seems as if developers should be in charge of almost everything... and certainly it does cut costs (at least in Europe where we don't have such salaries as in the USA):
developers should know about frontend, backend, databases, now the cloud, devops, developers should have a "product mind", they know monitoring, they should do paid on-call rotations, they should do QA, they should help in technical interviews, they should mentor junior colleagues, they should do "dojos" and "katas", they are encouraged to go to conferences and "share knowledge" when they come back, they should help with the onboarding of new colleagues, they should organize "tech talks", they should... and all with the excuse "we care about you growing as a professional".
If you were running a business, and wanted someone to do tech talks, or design/run interviewing for tech candidates, or have mentoring for junior devs, would you prefer non-developers to do it?
Looks like a pretty cool product. To play devil's advocate I'd like to poke holes at two things:
Product team caring about quality: Devs need to balance quality with speed, and feel appropriately ashamed when something breaks on prod. To the extent that devs optimise for speed, I don't see why product would be any different. The product team has a large backlog and many important deals blocked by certain features. The incentive to optimise for speed is just as strong. The tension you get between a dedicated QA team and a dev team arises precisely because the QA team cares _only_ about quality. So by moving the responsibility to product you'll either see more corner cutting due to product optimising for speed, or more tension due to product optimising for quality. I don't think you can have your cake and eat it too.
Feasibility of no-code testing: having your browser interact with the page in the way you would like is a good fit for a no-code approach. But most of the effort in writing tests, I've found, is setting up the data (e.g. with factories or fixtures). I'm not so sure that you can no-code that side of QA as easily. If I'm right about that, it means product will end up dependent on devs to write the tests.
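To make the data-setup point concrete, this is roughly what I mean - a minimal factory sketch (the User shape is hypothetical). The interesting state lives here, not in the clicks:

```ts
// A tiny test-data factory: sensible defaults plus per-test overrides.
interface User {
  id: string;
  email: string;
  plan: "free" | "pro";
  unpaidInvoices: number;
}

let counter = 0;
function makeUser(overrides: Partial<User> = {}): User {
  counter += 1;
  return {
    id: `user-${counter}`,
    email: `user${counter}@example.com`,
    plan: "free",
    unpaidInvoices: 0,
    ...overrides,
  };
}

// Seeding the state worth testing is the real work of the test:
const delinquentProUser = makeUser({ plan: "pro", unpaidInvoices: 12 });
```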
Regarding the incentive structure, I think you're right - there's no way to eliminate the friction that comes from competing incentives. Our experience has been that empowering the product organization to make that tradeoff themselves leads to the optimal outcome for the business. The goal isn't to eliminate the tradeoff between speed and quality, but surface it, and put it in the hands of the people who are the business decision makers, which tends to be product.
Re test data - we had seen this bottleneck with our previous product, which was purely about crowd testing. What we've seen since we shipped no code automation is that much of the data seeding by less mature teams can be done through the tests themselves. This is suboptimal, but with automation so cheap and fast, it works. Then over time the engineering team can seed the states that are most often created through the tests.
* It eventually becomes too much work for the PM as the product grows or the PM just gets tired of the tediousness of creating tests for all the edge cases
* PM hires someone to help with creating the automation tests
I agree with the premises, but not necessarily the conclusion. Any team with an attitude of “that’s not my job” is fundamentally broken.
The team is responsible for product, development, quality, ops, CI, etc. But you have to adjust your definition of “team”.
I very much agree with Google’s SRE model, and implementing it has worked well for us. The dev team is not an island unto itself, but rather a piece of a larger whole. SREs, Quality, Ops, and others are also true software engineers, and we tear down the silos between teams by working across our imaginary lines. Because we all rise and fall together.
QA isn’t responsible for writing all of the tests, but they work to build a platform that makes it easier for anyone to write and run tests. Ops isn’t responsible for managing all of the infrastructure, but they work to build a platform that makes it easier for anyone to deploy services quickly and with great success. Security doesn’t stand in the corner telling what to do, but they work together to build tools and systems that can help an app team discover and remediate security issues.
The solution is not no-code anything. The solution is software engineering and building platforms to reduce the “toil”, build repeatable automated processes, and work together and listen to each other to make sure we ship high-quality services for our customers.
Full-cycle development. Similar to the concept of “cross-training” in your first job in fast food. It’s like “full-stack development”, but encompasses everything a service needs to succeed and not be a pile of shit.
I guess this debate entirely depends on the product and the customer base. If you make a product like Excel or Photoshop you probably need enough people to have taken an independent look before releasing it to the customers. QA is a legit formal step and protects the reputation, time and money for the organization. Smart QA teams will automate their processes and I have seen some pretty sophisticated automated test suites. In such a setup fluent communication between dev and QA is probably critical.
On the flip side there are organizations which do not have the burden of fickle customers. An example of such an institution is a high frequency trading firm. The firm that I work for has two roles, developer or trader. These companies hire exclusively on-campus and with very transferrable skill-sets between the two roles. In such an environment, where both the writer of the application and the user is equally competent, the role of QA is a burden, there isn't much to add. The trader proposes an idea, developer implements it, trader has all the incentive to make the idea work and the two basically figure out ways to get a feature in prod as fast as possible and ensure that you make money. The success is tangible, the money is directly linked to the bonus you make. I see similar parallel in a startup where the people who generate the idea and people who implement the idea are the same pool. In such empowering environments, the proximity of user and developer just pushes for excellence.
Most of the time the advocates and critics of QA do not realize that it is all context. QA as a role saves embarrassment, money, time etc. but that does not mean an engineering team is incomplete without a QA.
I do feel that this article is offensive to the QA community when it portrays QAs as inferior members of the team, doing work that is not worth the `expensive` developer's time.
From a couple different perspectives, docs and support also love (or should love) dedicated QA as a last line of defense against product changes that engineers don't consider worth documenting.
Every single release in an org with no dedicated QA and dev ownership of testing, I've seen something change in the product that the developers deemed not worth documenting, and every single release a user hits that thing and support is left not even knowing the change happened, looking like they aren't the product experts that sales told the user they were.
But aside from the dedicated viewpoint change of a QA analyst vs. an engineer, it's also a different relationship with other teams. Whether QA is part of engineering teams or siloed, it's usually a little easier for a tech writer or support engineer to pitch process improvements and regular communication to QA than developers. It's not about capacity, but value — adding a test to the product that confirms a code example in the documentation is accurate helps the product and the writer, so someone with a QA-first mindset doesn't have to think twice. But to a developer with a velocity-first mindset, it's busy work (shouldn't the tech writer own that check?) and maintenance load (isn't this another low-value test I have to manually update every release?).
Tests should be a form of docs for what's supported in the product. If a feature isn't tested, documenting it for users is a risk. If the tests don't confirm what's documented, it's a risk. If it's a risk, it's not the devs who'll deal with it first when it breaks, it's support. Having that healthy relationship between documentation, support, and testing is a force multiplier, and it's harder to build that relationship when development priorities like velocity conflict with testing priorities like coverage and accuracy.
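One cheap way to wire those together, sketched under assumptions (Jest, a markdown doc at a hypothetical path, JS-fenced examples): pull the examples out of the docs and at least check they still parse, so the two can't silently drift apart.

```ts
import * as fs from "fs";

// Extract fenced js code blocks from the doc without hard-coding the fence
// characters inline.
const FENCE = "`".repeat(3);
const doc = fs.readFileSync("docs/getting-started.md", "utf8");
const pattern = new RegExp(FENCE + "js\\n([\\s\\S]*?)" + FENCE, "g");
const examples = [...doc.matchAll(pattern)].map((m) => m[1]);

describe("documented examples still parse", () => {
  examples.forEach((code, i) => {
    test(`example #${i + 1}`, () => {
      // new Function throws a SyntaxError if the snippet no longer parses.
      expect(() => new Function(code)).not.toThrow();
    });
  });
});
```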
I recently finished working as a team lead in a small startup company in a foreign land where I do not speak the language fluently. The entire team, aside from one member, was unable to speak or write in English.
We had no "QA" team. The director of engineering expected us to do our own QA. He expected me to make sure it happened. This project consisted of breaking up an existing system into three parts: a backend and two separate frontends. New technology was involved and parity + additional new features were expected. The team was small. Half of the six engineers were not performing as expected for a variety of reasons.
This thing broke me.
In order to try to attain what he was asking for, evidence was required: screenshots, GIFs, explanations in PRs.
I was removed from the position of team manager just two months before release and replaced by a native speaker.
Yes I am bringing my weird personal anecdote into the conversation, but not having people who were specifically there to do testing was soul crushing.
There's a really important issue with QA, which is that you can't just farm it off to a team that doesn't know how to use the product. In my last job, which was working on engineering software, they had a few student interns and a couple of permanent staff who had no training as engineers. So when bugs came in from customers, they had no ability to work out what the issue was. It was really dysfunctional - they weren't able to say "this is because the customer doesn't understand the physics", for example.
They eventually moved an application engineer from pre-sales into the team and it made a huge difference in triaging this stuff, because many 'bugs' that came in were things like "my model isn't converging" and the answer was 75% of the time things like "you need to refine your mesh further", whereas previously it got shunted onto the dev team.
This feels a lot like a reincarnation of cucumber/gherkin, except they've replaced business-facing text DSL with a no-code visual UI. The intention is the same - to have the customer own the tests.
This looks like it has a shallower learning curve to get started, but I would imagine that after a certain point it winds up being less productive to use the UI than to write code and this is ultimately why visual coding hasn't taken off. At some point someone will need to become a power user and be their organization's resident expert on Rainforest, but at that point they're spending all of their time in Rainforest and they're no longer a product owner embedded in the business.
At the end of the day the business owner should be writing specifications, the engineers should be using them to create tests and they each should be collaborating closely on both.
If you want something it's closer to, it's Sikuli Script: visually looking at the page (or whatever, tbh), then manipulating it using the keyboard and mouse. It's basically done using a KVM, so it's much closer to what a user would be able to see and do than something like Cucumber or Gherkin.
However, we also allow you to test using a crowd of humans, should folks need more nuanced feedback about things, or have much more complex things to ask.
Disagree on who should be writing tests; I think that's only the case today because tooling doesn't support anyone but engineers (QA or not) automating things, or doing manual tests.
Engineers should absolutely be writing unit tests and integration tests for their APIs. Personally I find that it brings a lot more integrity and a sense of ownership into the process when engineers are required to deliver tests.
I disagree with the problems that you mention in this article:
> Developers aren’t incentivized to prioritize QA testing
> Developers are typically evaluated based on the quantity of software they ship, and how fast they ship it.
That's an organizational problem that is not universal and certainly won't be solved by a QA automation tool.
> Developers’ job satisfaction goes down when they’re in charge of QA
> Expanding upon one of the previous points, we’ve seen that most developers just don’t enjoy doing QA.
This might be the case for manual testing but for automated testing the opposite is true. Delivering tests along with code increases integrity, makes debugging significantly easier, and helps clearly communicate the intention of each feature. This only works if there is collaboration between the developers and the product owner.
Clearly you have a viable product that works for many organizations, but it's certainly not a one-size-fits-all solution nor a best practice.
Thanks. I agree re unit-tests and integration tests. Mostly we're focused on (and talking about) testing what humans end up using directly - e.g. interfaces to web apps or mobile apps.
I think you're wrong re: organizational problems - that's part of it, but most developers (in my experience) do not want to do QA outside of unit tests and maybe integration tests. They want to write code and ship things. Automation, at least traditionally, is brittle as well as slow to write, and few love it. While tooling like Cypress does improve things over Selenium, I've still not met a developer who actually enjoys that kind of testing.
I've had a couple goes at teams trying to roll out cucumber tests, and I still don't understand quite what the point is.
Nobody but developers could actually manage to write any tests, and it was harder than just using the normal tools, plus maintaining all the glue besides.
The Gherkin language that Cucumber uses for its specifications, written well, is brilliant at distilling what the customer wants, what the developer is going to build, and how the QA is going to assert that acceptance is measured. But it has to be written collaboratively by all of those together, and by the end of it you get a shared understanding. This is the most important part.
The glue code behind it is a different fish, and needs someone with software engineering training to build it and love it.
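For anyone who hasn't seen that split, a minimal cucumber-js sketch - the Gherkin (in the comment) is the shared artifact, the glue below is the engineering work; the step text and app helpers are made up:

```ts
// Feature: Refunds
//   Scenario: Refund within the return window
//     Given an order placed 3 days ago
//     When the customer requests a refund
//     Then the refund is approved automatically
import { Given, When, Then } from "@cucumber/cucumber";
import assert from "assert";

// Hypothetical application-level helpers the glue delegates to.
declare function createOrder(opts: { ageInDays: number }): Promise<{ id: string }>;
declare function requestRefund(orderId: string): Promise<{ status: string }>;

let order: { id: string };
let result: { status: string };

Given("an order placed {int} days ago", async (days: number) => {
  order = await createOrder({ ageInDays: days });
});

When("the customer requests a refund", async () => {
  result = await requestRefund(order.id);
});

Then("the refund is approved automatically", () => {
  assert.strictEqual(result.status, "approved");
});
```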
Tangentially, from my experience over the years, I understand that QA testing should be done on public (user) interfaces.
Intermediate data marshalling or transport, e.g. Kafka delivering data from one application to another, is a dev ticket but has no QA component. Just as QA doesn't look at the individual data structures and method access modifiers, there are technical tasks and processes that QA has no stake in, nor can they do anything but complicate a delicate system with additional instrumentation/hooks. Generally, developers will want instrumentation, which is a public interface that QA can test, but it is not mandated by QA as part of the design.
Mostly the issue with developers doing their own QA is that the time is never allocated to do it properly. They are expected to design, dev, test, release, and participate in improving team processes every week. Testing almost always gets cut due to the constraints of reality. Barely enough time is allocated to dev itself. Everyone talks about the advantages of devs doing their own QA, but in practice it means they just cut QA out altogether.
We have a team that helps with testing infrastructure but the tests are created by developers on the functional team. Everything is automated. Seems to work well.
Does a project need a QA person?
It depends. I've had projects where QA offered negative productivity, and projects where we spent weeks on issues because we didn't have QA.
I believe the problem is that we do not have enough good QA professionals. Being a QA is about more than applying a process: it's understanding the project's needs, the UX, and the process, and being able to deliver on all fronts.
We've suffered from the same problems as highlighted here. What helped us was a low code solution, https://smashtest.io, which is basically an English wrapper over Selenium. The developers spend less time on tests, and the QAs aren't an afterthought.
This - especially now that more apps are being built with no-code tooling (Bubble, anyone?). Not a lot of existing tooling works with it, and code-based testing isn't viable even if it did.
In one of my previous companies, our workload as developers went down significantly, code quality went up, and bug complaints went down after we hired a QA team, as opposed to the developers doing it themselves.
It is important for code to be looked at and tested with a fresh perspective.
Am I reading the pricing correctly? $135 for running 30 specs?
We have Cypress tests that run automatically for every single git push which are running throughout the day constantly... if you did a price comparison we'd be saving something like $140k per month by not having a pretty UI for building tests
I think QA-specific people that are specifically not SDETs create bad habits on teams. I think asking developers and product owners/managers to "own" QA fixes the "just throw it over the wall and hope for the best" problem.
But then you're settling for low quality testing. There's an excellent methodology a good test team uses on devs that throw it over the wall. They throw it back with the first bug report.
That said, it's sometimes the right thing for the team when something gets thrown over raw. A good test team should be fine with that and play their part.
> But then you're settling for low quality testing.
Not necessarily though. I would also argue that manual testing is just an insurmountable timesink. You'll never have enough time because manual testing balloons to the time allotted.
> There's an excellent methodology a good test team uses on devs that throw it over the wall. They throw it back with the first bug report.
Sure, but continuing a cycle of "not my problem" helps no one and wastes time. Removing QA as a team/role/specialization, and instead making it a step in the process a dev goes through to ship software, you're fixing the broken feedback cycle QA teams create.
Code review. Read the results of someone thinking through a process. Spot more than they will, simply by throwing more eyes at it. Actually fairly effective: getting a senior dev to cast even a lazy eye over everything gives more opportunities to discuss Why It's Done This Way and Why We Don't Do That and Why This Framework Sucks And How To Deal With It with specific concrete examples which the other dev is currently thinking about. But it's still easier to write the code yourself than review it, and things still get missed no matter how careful you try to be, so it's still just another layer.
Unit tests. They cover the stuff we think to check and actually encountered in the past (ie. regressions). Great for testing abstractions, not so great for testing features, since the latter typically rely on vast amounts of apparently-unrelated code.
Integration tests. Better for testing features than specific abstractions, and often the simplest ones will dredge up things when you update a library five years later and some subtle behaviour changed. Slow sanity checks fit here.
UI-first automation (inc. Selenium, etc). Code or no-code, it's glitchy as hell for any codebase not originally designed to support it; tends to get thrown out because tests which cry wolf every other day are worse than useless. Careful application to a few basics can smoke-test situations which otherwise pass internal application sanity checks, and systems built from the start to use it can benefit a lot.
Manual testing. Boring, mechanical, but the test plans require less active fiddling/maintenance because a link changed to a button or something. Best for exploratory find-new-edge-cases, but throwing a bunch of students at a huge test plan can sometimes deliver massive value for money/coffee/ramen. Humans can tell us when the instructions are 'slightly off' and carry on regardless, distinguishing the actual important breakage from a trivial 2px layout adjustment or a CSS classname change.
So that's the linear view. Let's go meta, and combine techniques for mutual reinforcement.
Code review benefits from local relevance and is hampered by action at a distance. Write static analysers which enforce relevant semantics sharing a lexical scope, ie. if two things are supposed to happen together ensure that they happen in the same function (at the same level of abstraction). Encourage relevant details to share not just a file diff, but a chunk. Kill dynamic scoping with fire.
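As a toy example of that kind of rule, a sketch using the TypeScript compiler API - the beginTransaction/commit pairing is hypothetical; the point is that the check is mechanical and local:

```ts
import * as ts from "typescript";
import * as fs from "fs";

// Rule (hypothetical): any function that calls beginTransaction() must also
// call commit() or rollback() in the same function body.
function findViolations(path: string): string[] {
  const source = ts.createSourceFile(
    path, fs.readFileSync(path, "utf8"), ts.ScriptTarget.Latest, true);
  const violations: string[] = [];

  const visit = (node: ts.Node): void => {
    if (ts.isFunctionDeclaration(node) && node.body) {
      const body = node.body.getText(source);
      if (body.includes("beginTransaction(") &&
          !body.includes("commit(") && !body.includes("rollback(")) {
        violations.push(node.name?.text ?? "<anonymous>");
      }
    }
    ts.forEachChild(node, visit);
  };
  visit(source);
  return violations;
}
```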
Unit and Integration tests can be generated. Given a set of functions or types, ensure that they all fit some specific pattern. This is more powerful than leveraging the type system to enforce that pattern, because when one example needs to diverge you can just add a (commented) exception to the generative test instead of rearchitecting lots of code, ie. you can easily separate sharing behaviour from sharing code. Write tests which cover code not yet written, and force exceptions to a rule to be explicitly listed.
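And a sketch of the generated-test idea (Jest assumed; the handlers module and its auth convention are hypothetical): one rule applied to every export, failing granularly, with divergences listed explicitly rather than slipping through.

```ts
import * as handlers from "../src/api/handlers"; // hypothetical module

// Exceptions to the rule are named here, with a reason, instead of silently
// diverging from the pattern.
const KNOWN_EXCEPTIONS = new Set(["healthCheck", "publicStatus"]);

describe("every API handler declares an auth policy", () => {
  for (const [name, handler] of Object.entries(handlers)) {
    if (KNOWN_EXCEPTIONS.has(name)) continue;
    test(name, () => {
      // Any handler added later is covered automatically.
      expect((handler as { auth?: unknown }).auth).toBeDefined();
    });
  }
});
```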
UI testing is rather hard to amplify because you need to reliably control that UI in abstractable ways, and make it easy to combine those operations. I honestly have no idea how to do this in any sane way for any codebase not constructed to enable it. If you're working on greenfield stuff, congratulations; some of us are working on stuff that's been ported forwards decade by decade... Actual practical solutions welcome!
That's my best shot at a 2D (triangular?) view: automated tests can enforce rules which simplify code review, etc. The goal is always to force errors up the page: find them as early as possible as cheaply as possible and as reliably as possible.
The machine can't check complex things without either missing stuff or crying wolf, but it can rigidly enforce simple rules which let humans spot the outliers more easily.
And it is amazing how reliable a system can become just by killing, mashing and burning all that low-hanging error-fruit.
Code reviews should be about project structure and abstractions, and about keeping the approach consistent - using the team's common approach instead of each team member doing whatever. Syntax and formatting should be linted and formatted automatically; that's nothing for the reviewer. The second thing is a check by a second pair of eyes that they understand the code in question the same way.
Unit and integration tests should not be generated. They should be written by people when they find the code they are writing doing complex things, like some specific calculation. They are more a tool for understanding what you are doing, and then maybe leaving some tests behind for regression. But don't generate BS tests that will only slow down the system and the people. People have to understand what is going on and stay on top of it, and never "just run the tests", because tests that pass green but are actually wrong are really bad.
UI testing should not be abstracted away - it should only augment manual UI testing - so the tester should be automating their own work after they have done it manually. That tester should also find the things that take a long time, or have to be done multiple times and don't change often, so they win time for more important things. A QA person should also always stay engaged with the system and the automation, because that is the only way to keep domain knowledge.
It depends how you count the unit/integration tests, really.
If you've got a general rule which must apply across an entire system, generate the necessary tests so that they fail granularly and don't require messing around to find the exact case which breaks. IMO that's one test, just applied to a range of cases.
An example might be mappings for Entity Framework (or similar ORMs, etc). Auto-generated migrations simply do not work if you need to limit migration downtime and maintain certain data invariants (which can't be specified in the schema, and yes, those always exist). So you need to write database migrations manually. This introduces risk of desync of entity mappings and schema.
So don't just spot-test roundtripping entities (a nontrivial system will have hundreds and something always gets missed). Instead, write a tool to introspect the DB schema and the Framework's mappings, and check that they match sufficiently closely. Every time someone adds an entity or property or something, it's already covered.
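In generic terms, that introspection test can look something like this sketch (Postgres, the pg client, and Jest assumed; entityColumns stands in for whatever mapping metadata your ORM actually exposes):

```ts
import { Client } from "pg";

// In a real setup this map would be read from the ORM's mapping metadata,
// not maintained by hand.
const entityColumns: Record<string, string[]> = {
  users: ["id", "email", "created_at"],
  orders: ["id", "user_id", "total_cents"],
};

test("entity mappings match the live schema", async () => {
  const db = new Client({ connectionString: process.env.TEST_DATABASE_URL });
  await db.connect();
  try {
    for (const [table, expected] of Object.entries(entityColumns)) {
      const res = await db.query(
        `SELECT column_name FROM information_schema.columns
         WHERE table_name = $1 ORDER BY column_name`,
        [table]
      );
      const actual = res.rows.map((r) => r.column_name).sort();
      expect(actual).toEqual([...expected].sort());
    }
  } finally {
    await db.end();
  }
});
```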
Similar cases exist when dealing with any interface between separate systems, especially when you don't control one of them. If you're regularly mapping between two models, use something like Automapper which can be asked to verify its mappings to check that every property is handled in some way.
(Granted, Automapper doesn't catch everything, but it builds a model that could probably be introspected over to spot encountered bugs and check that other possible examples of those bugs don't exist. Doing so generatively catches future additions of possible cases for free. If you're really paranoid, define some means of marking manually-written tests which cover each case, and test that a test exists for each case.)
Computers are really good at force-multiplication. They should trivially be capable of spotting other instances of known categories of bug. This is not hard to do and doesn't require wooly nebulous machine-learning shit: we've had introspectable ASTs since the dawn of compilers.
I can't find the bit where he talks explicitly about this as I've lost access to the book I read it in (and don't even remember the name). I'll see if I can dig it out.
In the passage I am unable to find, he gets into the issue of having a worker making widgets, and another worker checking the widgets. In our world this would be a developer and a test author. He states that this reduces quality: the maker knows that he can make poor parts when he is in a hurry, because the tester will catch them; the tester knows the worker personally, and trusts that he will make good parts, so doesn't have to be very thorough. Instead, we need the worker making the widget to be responsible for its quality, and further to be empowered to do so.
It's hard to get them to provide details to the engineers. I haven't seen many who write user stories. Asking them to build no-code automated test cases, that's ... ambitious.
Maybe their tool helps? But these sorts of self-justifying articles are just untrustworthy marketing babble. Big turnoff to me. I stopped reading halfway through; do they ever mention the pros of alternatives, or is this purely arguing "here is why our company is the only good solution"?
[1]: https://twitter.com/brenankeller/status/1068615953989087232