The code review feature seems too expensive (to me) to run on every PR automatically: $0.75 per 100 lines of code. From their example pricing: "if you have a typical pull request with 500 lines of code, it would only cost $3.75 to run CodeGuru Reviewer on it." I wonder if it's actually good enough to justify that price.
I don’t have much context, but I’ve never seen Amazon as a technical leader in the industry. They’re absolutely a business leader, and the services they provide can be good, but at a code level I’ve always thought of them as very MVP: if it works, it’s good enough.
For a code review service I’d expect a level far above that. Maybe they can deliver it, but I don’t have any existing positive bias toward this, and a few things against it.
Require cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.
metroRequire @ :8081/index.bundle?platform=ios&dev=true&minify=false:93
:8081/index.bundle?platform=ios&dev=true&minify=false:93 Require cycle: node_modules/aws-sdk/lib/react-native-loader.js -> node_modules/aws-sdk/lib/credentials/cognito_identity_credentials.js -> node_modules/aws-sdk/clients/cognitoidentity.js -> node_modules/aws-sdk/lib/react-native-loader.js
Require cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.
metroRequire @ :8081/index.bundle?platform=ios&dev=true&minify=false:93
:8081/index.bundle?platform=ios&dev=true&minify=false:28851 Warning: AsyncStorage has been extracted from react-native core and will be removed in a future release. It can now be installed and imported from '@react-native-community/async-storage' instead of 'react-native'. See https://github.com/react-native-community/react-native-async...
reactConsoleErrorHandler @ :8081/index.bundle?platform=ios&dev=true&minify=false:28851
:8081/index.bundle?platform=ios&dev=true&minify=false:93 Require cycle: node_modules/@aws-amplify/analytics/lib/Providers/index.js -> node_modules/@aws-amplify/analytics/lib/Providers/AWSKinesisFirehoseProvider.js -> node_modules/@aws-amplify/analytics/lib/Providers/index.js
Require cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.
metroRequire @ :8081/index.bundle?platform=ios&dev=true&minify=false:93
:8081/index.bundle?platform=ios&dev=true&minify=false:93 Require cycle: node_modules/@aws-amplify/predictions/lib/types/Providers/AbstractConvertPredictionsProvider.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/index.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/AbstractConvertPredictionsProvider.js
Require cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.
metroRequire @ :8081/index.bundle?platform=ios&dev=true&minify=false:93
:8081/index.bundle?platform=ios&dev=true&minify=false:93 Require cycle: node_modules/@aws-amplify/predictions/lib/types/Providers/index.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/AbstractIdentifyPredictionsProvider.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/index.js
Require cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.
metroRequire @ :8081/index.bundle?platform=ios&dev=true&minify=false:93
:8081/index.bundle?platform=ios&dev=true&minify=false:93 Require cycle: node_modules/@aws-amplify/predictions/lib/types/Providers/index.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/AbstractInterpretPredictionsProvider.js -> node_modules/@aws-amplify/predictions/lib/types/Providers/index.js
Require cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.
metroRequire @ :8081/index.bundle?platform=ios&dev=true&minify=false:93
:8081/index.bundle?platform=ios&dev=true&minify=false:93 Require cycle: node_modules/@aws-amplify/predictions/lib/Providers/index.js -> node_modules/@aws-amplify/predictions/lib/Providers/AmazonAIPredictionsProvider.js -> node_modules/@aws-amplify/predictions/lib/Providers/index.js
That’s incredibly cheap, assuming it provides good suggestions. How much time does it take you to review 500 lines of code change, and what’s your time worth? If it takes 10 minutes and your time is worth $20/hour or more, this service will pay for itself immediately.
Our code reviews are far more "is this the right way to solve the problem?" than "hey, you never use that variable you declared." The latter would be picked up by our linter; I'm having a hard time seeing the value proposition here.
>It’s like having a distinguished engineer on call, 24x7
I don't believe that, regardless of how many times they sprinkle in the words "machine" and "learning".
I'd be very surprised if the service they've announced is a linter.
The announcement says it can even analyze parts of your code that are more computationally expensive than they need to be. I'm not sure I understand the skepticism: surely they have among the largest code repositories in the world. Why couldn't they train models on them to learn best practices and even compare code patterns against different metrics?
No - OP meant computationally expensive, not cognitively expensive. Two nested for-loops can be O(n^2) but have a cyclomatic complexity as low as 1.
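The distinction can be made concrete with a small sketch (the function and its name are hypothetical, not from the thread):

```python
# Hypothetical illustration: runtime cost vs. cyclomatic complexity.
# This function does O(n^2) work (two nested loops) yet contains no
# branches, so its cyclomatic complexity is the minimum value of 1.
def pairwise_sums(values):
    total = 0
    for a in values:        # outer loop: n iterations
        for b in values:    # inner loop: n iterations each -> n^2 steps
            total += a + b
    return total
```

A tool that only counted branches would see nothing noteworthy here, while a reviewer (or a tool reasoning about cost) might flag the quadratic work.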
It may not be as good as a human, but I highly doubt it's a linter either. It's somewhere in between. If their claims are true, and it has been trained on hundreds of thousands of their own reviews, the AI could have picked up common patterns that are beyond lint but still real mistakes a good reviewer would spot.
I agree 100%: if it provides good enough suggestions, it could pay for itself pretty easily on regular day-to-day PRs. (Although: not all 500 line PRs are made equal.)
My original comment was definitely unclear. I actually had two separate thoughts (that I didn't communicate well at all):
(1) if your team has occasional large, automated PRs (code generation, automated refactors, etc), you probably don't want to run this tool on them because of cost, so anyone that has these large PRs and uses CodeGuru probably needs to build a way into their automation to suppress CodeGuru (or build a way to invoke it for specific PRs)
(2) I also wonder if it's good enough to justify the price on regular PRs
We don't have many situation-(1) PRs where I work now, but they do come up occasionally. For example, I've used IntelliJ IDEA's structural find-and-replace to do very large automated refactors where CodeGuru would be very expensive and probably provide little value. We also check in some generated code (we usually don't, but there are a couple of exceptions where we weighed the tradeoffs and decided checking in the generated code was the better solution, in our eyes).
Only if CodeGuru gets a lot of the value of a code review. But I think finding actual bugs is a pretty small part of it.
A good code base is a team-created intellectual work. For that to happen, you need a ton of collaboration, shared learning, evolution of norms, interpersonal bonding, and practice of key social behaviors (e.g., principled negotiation, giving good feedback, recognizing and rewarding good actions). Automated code review gets at none of that.
I disagree. I regularly review PRs with more than 500 changed lines/20+ changed files. I read every single line. I put the same amount of effort into reviewing code as I do writing it; every software engineer should.
You don't replace the human code review. You supplement it. An AI that can replace the code review would need to be an AGI that "sat" down with your team and understood the architectures and meaning in your code. If it can't say "WTF?" it can't do a full code review.
You really believe this will free you from traditional code reviews? I would treat it as an advanced linter, and from the screenshots it looks like that's how it integrates itself into GitHub.
Yeah, or looking at it the other way around: if this could replace your human reviewer, then maybe you'd do well to have some serious discussions with your HR department...
Now that I think of it, if it's priced by lines of code, it perversely incentivizes people to minimize the number of lines, no? Does it count whitespace and comments? Can I minify my code before passing it to this, then unminify it?
Many languages allow code to be minimized down to a single line. I guess they must have a characters-per-line qualifier somewhere.
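As a rough sketch of the perverse incentive (hypothetical code, nothing from the announcement): the same logic can show up as several billable lines or as one.

```python
# Hypothetical: identical behavior, very different "billable" line counts.
def clamp_verbose(x, lo, hi):
    # Several lines a per-line price would charge for individually.
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# The same logic "minified" to a single line:
clamp_one_liner = lambda x, lo, hi: min(max(x, lo), hi)
```

Without a characters-per-line cap, the one-liner would cost a fraction of the verbose version while being strictly harder to review.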
I was thinking about the same thing. Even for a small project with 3 developers, it seems like this could easily rise to $100+/month, for suggestions that may not even be that useful.
If another developer reviewed your code, how much time would it take them, and how much is that time worth?
If you divide the 50 bucks by that number, you get a cost ratio. If that ratio is lower than the ratio of (benefit expected from automated code review) / (benefit expected from manual code review), it's worth using.
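As a back-of-envelope sketch using numbers from earlier in the thread ($0.75 per 100 lines, a reviewer spending 10 minutes on a 500-line PR at $20/hour; every figure here is an assumption):

```python
def review_cost_ratio(lines, price_per_100=0.75,
                      hourly_rate=20.0, minutes=10.0):
    """Ratio of the tool's cost to the value of the reviewer
    time it might save. All default figures are assumptions."""
    tool_cost = lines / 100 * price_per_100    # 500 lines -> $3.75
    human_cost = hourly_rate * minutes / 60.0  # 10 min at $20/h -> ~$3.33
    return tool_cost / human_cost
```

With these numbers the ratio comes out around 1.1, i.e. the tool costs roughly what the reviewer's time does, so whether it's worth it hinges entirely on how useful the suggestions are relative to a human's.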
I guess we can speculate all we want; in the end, only experience will show if the service is worth it or not.
Nope, it's mostly code. It's a gnarly PR that atomically delivers a feature (and removes the feature that the new one replaces), but this looks about right for my weekly productivity overall, except I usually submit it in smaller PRs, and the delta is mostly lines added.
This seems like a good incentive not to generate thousands of lines of code with each PR, which most would probably consider a feature as opposed to a bug.
If you split the same number of lines over two, three, ten, etc PRs it still costs the same. If anything it is incentivising code-golf via line minimization.
I'm not a fan of this. My team does this a lot and invariably it leads to having stuff be unfindable because the source you really want is in your build directory and not in code search. Which is annoying but manageable if I build the project, but partners who don't will have an even harder time of things.
More recent efforts have us check in generated code alongside the "config" files, and automated processes ensure you check in the generated code if you touch the config file. It's much better this way.
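One way to sketch that kind of automated guard (the generator command and output directory below are placeholders I made up, not anything from the comment):

```python
import subprocess

def check_generated_fresh(generate_cmd, generated_dir, run=subprocess.run):
    """Re-run the code generator, then report whether the checked-in
    generated code still matches its config. `run` is injectable for
    testing; in CI, a False return should fail the build."""
    run(generate_cmd, check=True)  # regenerate in place
    diff = run(["git", "diff", "--quiet", "--", generated_dir])
    return diff.returncode == 0    # 0 means no drift from what's committed
```

A pre-commit hook or CI step calling something like `check_generated_fresh(["./generate.sh"], "src/main/generated")` keeps the config files and the generated code moving in the same commit.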
We generate source code into a "src/main/generated" directory. IntelliJ picks it up like any other source code with proper Gradle configuration and we ignore it with .gitignore.
The only downside we've found is that searching for references on GitHub can be a pain, and you have to remember to generate the code, but for the most part it's seamless.
I guess how you manage it depends on your IDE, if you can configure it to work nicely, and how much the generated source changes/needs to be read.
One exception case in our project is documentation- parts of it, like the index, are generated prior to commit. We like having the docs updated in the same commit as the change so there's never an opportunity for mismatch.
Generated code is an artifact of the source code. If you need it for something specific, regenerate it from the source when you pull that from your version control system. You're not getting any benefit by storing something that can be generated alongside the means to generate it.
Thank you for your response. The advantages you get:
* Hermetic builds are faster because code-gen only occurs when changes occur in the base code
* Lots of docgen tools don't support incremental compilation
* Diffs in generated code show up as diffs when you change the code-gen tool; it's easier to isolate changes caused upstream by the tool itself (say you want the entire org on Thrift 0.9.2, up from Thrift 0.8)
Downsides I can see:
* Large repo.
* Source of truth is now the generated code, not the source, so someone else using the source could get a different result.
Essentially operating in an empirical mode (i.e., does it provide benefits for the cost) and ignoring any philosophical objections, this seems like it could go either way depending on the situation.
> Source of truth is now the generated code, not the source, so someone else using the source could get a different result.
This seems to apply to the other side, I think. Generating from source with different tooling or tool versions could create different results, whereas using the generated code guarantees consistent behavior.
Tools like this should be built into your IDE. No developer wants automated feedback only at the end of the process, in a code review.
There are lots of academic ML review/suggestion tools. Those people come to the table with trials and statistics to assess the quality of their results. Amazon probably copied one of those papers, added a rules-engine to recommend their own APIs, and slapped a hefty price tag on it.
> I wonder if it's actually good enough to justify that price.
If it can spot a lot of issues (performance, security, bugs, etc.), $3.75 is definitely a good deal once in a while, but not on every single change (e.g., fixing a typo in a code comment).