
The code review feature seems (to me) too expensive to run automatically on every PR: $0.75 per 100 lines of code. From their example pricing: "if you have a typical pull request with 500 lines of code, it would only cost $3.75 to run CodeGuru Reviewer on it." I wonder if it's actually good enough to justify that price.



If it’s trained on software written by Amazon it’s probably worth the $3.75 just so you can do the exact opposite of what they recommend.


I don’t have much context, but I’ve never seen Amazon as a technical leader in the industry. They’re absolutely a business leader, and the services they provide can be good, but at a code level I’ve always thought of them as very MVP: if it works, it’s good enough.

For code review services I’d expect a level far above this. Maybe they are able to do that, but I don’t have any existing positive bias towards this, and a few things against it.


It'll probably generate irrelevant stats on the engineers to send directly to their managers to use against them in their next review.


You have no idea how much this resonated ^^

Just needed to add an AWS library to my code base and BAM! Here is how my console will look on every reload from now on:

    Require cycle: node_modules/aws-sdk/lib/react-native-loader.js -> node_modules/aws-sdk/lib/credentials/temporary_credentials.js -> node_modules/aws-sdk/clients/sts.js -> node_modules/aws-sdk/lib/react-native-loader.js
    Require cycle: node_modules/aws-sdk/lib/react-native-loader.js -> node_modules/aws-sdk/lib/credentials/cognito_identity_credentials.js -> node_modules/aws-sdk/clients/cognitoidentity.js -> node_modules/aws-sdk/lib/react-native-loader.js
    Require cycle: node_modules/@aws-amplify/analytics/lib/Providers/index.js -> node_modules/@aws-amplify/analytics/lib/Providers/AWSKinesisFirehoseProvider.js -> node_modules/@aws-amplify/analytics/lib/Providers/index.js
    Require cycle: node_modules/@aws-amplify/predictions/lib/types/Providers/AbstractConvertPredictionsProvider.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/index.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/AbstractConvertPredictionsProvider.js
    Require cycle: node_modules/@aws-amplify/predictions/lib/types/Providers/index.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/AbstractIdentifyPredictionsProvider.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/index.js
    Require cycle: node_modules/@aws-amplify/predictions/lib/types/Providers/index.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/AbstractInterpretPredictionsProvider.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/index.js
    Require cycle: node_modules/@aws-amplify/predictions/lib/Providers/index.js -> node_modules/@aws-amplify/predictions/lib/Providers/AmazonAIPredictionsProvider.js -> node_modules/@aws-amplify/predictions/lib/Providers/index.js

Each one is reported by metroRequire at :8081/index.bundle?platform=ios&dev=true&minify=false:93 and followed by: "Require cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle." And for good measure: "Warning: AsyncStorage has been extracted from react-native core and will be removed in a future release. It can now be installed and imported from '@react-native-community/async-storage' instead of 'react-native'. See https://github.com/react-native-community/react-native-async..."



Definitely no bias there ;P


That’s incredibly cheap, assuming it provides good suggestions. How much time does it take you to review 500 lines of code change, and what’s your time worth? If it takes 10 minutes and your time is worth about $22.50/hour or more, this service will pay for itself immediately.
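A quick back-of-envelope in Python (a sketch: the 10-minute review time is an assumption; only the $0.75 per 100 lines rate comes from AWS's example pricing):

    # CodeGuru's example pricing: $0.75 per 100 lines of changed code
    pr_lines = 500
    codeguru_cost = pr_lines / 100 * 0.75          # $3.75

    # Break-even hourly rate for a human spending 10 minutes on the same diff
    review_minutes = 10                            # assumed human review time
    breakeven_rate = codeguru_cost / (review_minutes / 60)
    print(breakeven_rate)                          # 22.5 -> ~$22.50/hour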


Our code reviews are far more "is this the right way to solve the problem?" than "hey, you never use that variable you declared." The latter would be picked up by our linter; I'm having a hard time seeing the value proposition here.

>It’s like having a distinguished engineer on call, 24x7

I don't believe that, regardless of how many times they sprinkle in the words "machine" and "learning".


I'd be very surprised if the service they've announced is a linter.

The announcement says it can even analyze parts of your code that are more computationally expensive than they need to be. I'm not sure I understand the skepticism--surely they have among the largest code repositories in the world. Why couldn't they train models on those to learn best practices and even compare code against different metrics?


Isn't this just calculating the cyclomatic complexity?


No - OP meant computationally expensive, not cognitively expensive. Two nested for-loops can be O(n^2) at runtime while keeping cyclomatic complexity near its minimum.
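To illustrate with two hypothetical Python functions (assuming McCabe-style counting, where each loop or branch adds one decision point):

    def sum_pairwise_products(xs):
        # O(n^2) runtime: n * n multiplications for n inputs, yet the
        # control-flow graph has only the two loop tests, so cyclomatic
        # complexity stays near its minimum.
        total = 0
        for a in xs:
            for b in xs:
                total += a * b
        return total

    def classify(x):
        # O(1) runtime, but four distinct paths: higher cyclomatic
        # complexity than the quadratic function above.
        if x < 0:
            return "negative"
        if x == 0:
            return "zero"
        if x < 10:
            return "small"
        return "large"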


It may not be as good as a human, but I highly doubt it's a linter either. It's somewhere in between. If their claims are true, and it has been trained on hundreds of thousands of their own reviews, the AI could have picked up common patterns that are beyond lint but still real mistakes a good reviewer would spot.


I agree 100%: if it provides good enough suggestions, it could pay for itself pretty easily on regular day-to-day PRs. (Although: not all 500-line PRs are created equal.)

My original comment was definitely unclear. I actually had two separate thoughts (that I didn't communicate well at all):

(1) if your team has occasional large, automated PRs (code generation, automated refactors, etc), you probably don't want to run this tool on them because of cost, so anyone that has these large PRs and uses CodeGuru probably needs to build a way into their automation to suppress CodeGuru (or build a way to invoke it for specific PRs)

(2) I also wonder if it's good enough to justify the price on regular PRs

We don't have many situation (1) PRs where I work now, but they do come up occasionally. For example, I've used IntelliJ IDEA's structural find-and-replace to do very large automated refactors where CodeGuru would be very expensive and probably provide little value. We also do check in some generated code (we usually don't do this, but there are a couple exceptions where we weighed the tradeoffs and decided checking in the generated code was a better solution, in our eyes).


Only if CodeGuru gets a lot of the value of a code review. But I think finding actual bugs is a pretty small part of it.

A good code base is a team-created intellectual work. For that to happen, you need a ton of collaboration, shared learning, evolution of norms, interpersonal bonding, and practice of key social behaviors (e.g., principled negotiation, giving good feedback, recognizing and rewarding good actions). Automated code review gets at none of that.


You seem to assume that the code review tool can do everything that a human code reviewer can.


On the other hand, a machine will not get tired or bored, whereas a human most definitely will if the diff is anywhere near that 500 lines.


I disagree. I regularly review PRs with more than 500 changed lines/20+ changed files. I read every single line. I put the same amount of effort into reviewing code as I do writing it; every software engineer should.


You don't replace the human code review. You supplement it. An AI that can replace the code review would need to be an AGI that "sat" down with your team and understood the architectures and meaning in your code. If it can't say "WTF?" it can't do a full code review.


Although this could certainly supplement a human reviewer, I don't think it could replace one.


You really believe this will free you from traditional code reviews? I would treat it as an advanced linter, and from the pictures it looks like that's how it integrates with GitHub.


Yeah, or looking at it the other way around: if this could replace your human reviewer, then maybe you would do well to have some serious discussions with your HR department...


Don't accidentally commit `node_modules` - that'd be a costly mistake!


Or a 35,000 line XML configuration file.


Now that I think of it, if it's priced by lines of code, it perversely incentivizes people to minimize the lines of code, no? Does it count whitespace and comments? Can I minify my code before passing it to this, then unminify it?


Many languages can have their code minimized down to a single line. I guess they must have a rule somewhere that counts some number of characters as a line.


But even then, it still pushes people to shorten variable names and apply other kinds of minification.


Can you just remove newlines from files? In most languages they're optional.


Count semicolons then.


If it is as smart as some people here think it would probably output: "are you f#$king with me?"


Maybe it’s priced on recommended line formatting.


“Time to hire some code golfers, we’ll put the whole app into one line of code.”


"CodeGuru says this should be separated into more lines..."


Really though, why charge by-the-line on this kind of product? Imagine if CodeCommit or Lambda charged you by the line too!


Nightmares in perl...


That sounds pretty terrible to be honest. I cannot imagine getting that kind of value out of it (that I would not get with a simple linter).


I was thinking about the same thing. Even for a small project with 3 developers, it seems like this would easily rise to $100+/month, for suggestions that may not even be that useful.
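Rough numbers behind that estimate (a sketch; the PR volume and size are assumptions, only the $0.75 per 100 lines rate is from AWS):

    RATE_PER_100_LINES = 0.75                  # from AWS's example pricing

    devs = 3
    prs_per_dev_per_week = 2                   # assumed PR cadence
    lines_per_pr = 500                         # assumed "typical" PR size
    weeks_per_month = 4.33

    monthly = (devs * prs_per_dev_per_week * weeks_per_month
               * lines_per_pr / 100 * RATE_PER_100_LINES)
    print(round(monthly, 2))                   # ~97.43 -> roughly $100/month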


At which point you'd stop paying for it, I would imagine?


There doesn't seem to be a point at which to start paying for it, and the time to stop paying for it seems mighty early.


PR I sent yesterday, line changes: +2,703 −3,529. That's like 50 bucks just for that PR.


If another developer reviewed your code, how much time would it take them, and how much is that time worth?

If you divide the 50 bucks by that number, you get a cost ratio. If it's lower than the ratio of (benefit expected by automatic code review) / (benefit expected by manual code review), it's worth using.
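Spelled out (a sketch with hypothetical numbers; only the ~$50 figure comes from the PR above):

    auto_cost = 50.0               # ~CodeGuru's price for the +2,703/-3,529 PR
    reviewer_hours = 2.0           # assumed human review time for a diff that size
    reviewer_rate = 75.0           # assumed loaded hourly cost of the reviewer
    human_cost = reviewer_hours * reviewer_rate   # $150.00

    cost_ratio = auto_cost / human_cost           # ~0.33
    print(cost_ratio)
    # Worth using if benefit_auto / benefit_human exceeds this ratio, i.e.
    # if the automated review delivers at least ~a third of the value of
    # the human one, under these assumptions.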

I guess we can speculate all we want; in the end, only experience will show if the service is worth it or not.


Dude, I hope that was the result of updating yarn.lock or similar, otherwise good luck to your reviewer!


Nope, it's mostly code. It's a gnarly PR that atomically delivers a feature (and removes the feature that the new one replaces), but this looks about right for my weekly productivity overall, except I usually submit it in smaller PRs, and the delta is mostly lines added.


Trying to compare it to another code analyzer... https://sonarcloud.io/about/pricing 100k lines for €10/mo.


Is that all code or just the diff, though? 100k lines of code in diffs seems like a lot, all code - not so much.


Or when a junior dev switches from tabs to spaces


Or when a junior dev switches from spaces to tabs ;)


heh.. I wish it were only junior devs.


That's insanely expensive if you're doing any type of code generation.


This seems like a good incentive not to generate thousands of lines of code with each PR, which most would probably consider a feature as opposed to a bug.


If you split the same number of lines over two, three, ten, etc. PRs, it still costs the same. If anything, it is incentivising code-golf via line minimization.


Exactly. IMO, if you're generating code, it should happen at build/compile time, not at check-in.


I'm not a fan of this. My team does this a lot, and invariably it leads to stuff being unfindable because the source you really want is in your build directory and not in code search. That's annoying but manageable if I build the project, but partners who don't will have an even harder time of things.

More recent efforts have us check in generated code alongside the "config" files, and automated processes ensure you check in the generated code if you touch the config file. It's much better this way.
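One way to automate that check (a minimal sketch in Python; the generator command and paths are hypothetical, not what the team above actually runs):

    # ci_check_generated.py: fail the build if checked-in generated
    # code is stale relative to the config files that produce it.
    import subprocess
    import sys

    # Hypothetical generator; replace with your real code-gen command.
    subprocess.run(["./tools/generate.sh"], check=True)

    # If regeneration changed anything under the checked-in output
    # directory, the commit forgot to include the regenerated files.
    diff = subprocess.run(
        ["git", "status", "--porcelain", "--", "generated/"],
        capture_output=True, text=True, check=True,
    )
    if diff.stdout.strip():
        print("Generated code is out of date; re-run ./tools/generate.sh "
              "and commit the result:\n" + diff.stdout)
        sys.exit(1)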


We generate source code into a "src/main/generated" directory. IntelliJ picks it up like any other source code with proper Gradle configuration and we ignore it with .gitignore.

The only downside we've found is that it can be a pain when searching for references on GitHub, and you have to remember to generate the code, but for the most part it is seamless.

I guess how you manage it depends on your IDE, if you can configure it to work nicely, and how much the generated source changes/needs to be read.


One exception in our project is documentation: parts of it, like the index, are generated prior to commit. We like having the docs updated in the same commit as the change so there's never an opportunity for mismatch.


What would the rationale be for that?


Generated code is an artifact of the source code. If you need it for something specific, regenerate it from the source when you pull that from your version control system. You're not getting any benefit by storing something that can be generated alongside the means to generate it.


Thank you for your response. The advantages you get:

* Hermetic builds are faster because code-gen only occurs when changes occur in the base code

* Lots of docgen tools don't support incremental compilation

* Diffs in generated code show up as diffs when you change the code-gen tool, making it easier to isolate changes that occur when your code-gen tool is upstream (say you want the entire org to move from Thrift 0.8 to Thrift 0.9.2)

Downsides I can see:

* Large repo.

* Source of truth is now the generated code, not the source, so someone else using the source could get a different result.

Looking at it empirically (i.e., does it provide benefits for the cost) and ignoring any philosophical objections, this seems like it could go either way depending on the situation.


> Source of truth is now the generated code, not the source, so someone else using the source could get a different result.

This seems to apply to the other side, I think. Generating from source with different tooling or tool versions could create different results, whereas using the generated code guarantees consistent behavior.


IMO it’s because generated code is not “source code”. It’s more similar to object files: both are generated by running a compiler.


Tools like this should be built into your IDE. No developer ever wants automated feedback at the end of the process in a code review.

There are lots of academic ML review/suggestion tools. Those people come to the table with trials and statistics to assess the quality of their results. Amazon probably copied one of those papers, added a rules-engine to recommend their own APIs, and slapped a hefty price tag on it.


> I wonder if it's actually good enough to justify that price.

If it can spot a lot of issues (performance, security, bugs, etc.), $3.75 is definitely a good deal to run it once in a while, but not on every single change (e.g., fixing a typo in a code comment).



