It can be, but it isn't necessarily. And I don't care about small values of n. If I'm spinning through two lists of size 5, it doesn't matter if option A is slightly faster than option B; both will run on the order of nanoseconds. The lower-complexity solution will continue to be reasonably fast as the input grows, and pretty soon the difference will be measured in seconds, minutes, hours, days... Using a worse-complexity solution for small inputs is a micro-optimization you can use sometimes, but it is a micro-optimization. Using the best time complexity is what I like to call a macro-optimization. It's the default solution. You might deviate from it at times, but you're much better off using complexity to guide your decisions than not. Once you know what you're doing you can deviate from it when appropriate; sometimes worse complexity is better in specific cases. But 99.9% of the time, the best time complexity is good enough either way.
I usually don't need the code I write to be optimal. It doesn't matter if it's a bit slower than it technically could be - as long as it's fast enough and will continue to be fast when the input grows.
Sometimes you may want to squeeze out the absolute maximum performance, and in that case you may be able to micro-optimize by choosing a worse-complexity solution for cases where you know the inputs will always be small. If that assumption ever breaks, your code is now a bottleneck. It can be a useful thing to do in certain niche situations, but for the vast majority of code you're better off just using the best-complexity solution you can. Otherwise you may come back to find that the code that ran in a millisecond when you profiled it now takes 20 minutes because there's more data.
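To make the "millisecond vs. 20 minutes" point concrete, here's a minimal sketch (in Python, with made-up data and function names) comparing a quadratic dedup against a linear one. At a few thousand items the gap is already obvious, and it grows with the square of the input:

```python
import time

def dedup_quadratic(items):
    # O(n^2): for each item, scan the result list for a duplicate.
    out = []
    for x in items:
        if x not in out:  # linear scan -> quadratic overall
            out.append(x)
    return out

def dedup_linear(items):
    # O(n): membership checks against a set are O(1) on average.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = list(range(3000)) * 2  # 6000 items, half of them duplicates

t0 = time.perf_counter()
a = dedup_quadratic(data)
t_quad = time.perf_counter() - t0

t0 = time.perf_counter()
b = dedup_linear(data)
t_lin = time.perf_counter() - t0

assert a == b  # same result, wildly different cost
print(f"quadratic: {t_quad:.4f}s  linear: {t_lin:.6f}s")
```

Both functions return the same first-occurrence dedup; only the growth rate differs, which is exactly why the small-input micro-optimization becomes a trap when the assumption breaks.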
How do you run a profiler in CI? Do you hook it up to your tests? You need some code that runs with actual input, so I guess you either profile the test suite or write tests specifically for profiling? Or maybe you write benchmarks and hook a profiler up to those?
This sounds like it could be a useful technique, but very time-consuming if you have to write the profiling tests/benchmarks specifically, and kind of useless if you just hook it up to your general test suite. I want my tests to run fast, so I don't use big datasets for testing, and you won't really see where the bottlenecks are unless you're testing with a lot of data.
You're right, "can" is the right word. But you got hyperfixated on that and ignored the rest of what I said. It's not just small n as in 5; it can be hundreds or thousands, and that can depend on the constants that people usually drop when doing complexity analysis.
Essentially, I'm saying use the profiler to double-check your estimates, because that's what (typical) complexity analysis is: an estimate. But a profiler gives you so much more. You can't always trust the libraries, and even when you can, you need to remember libraries have different goals than you do. So just grab a profiler and check lol. It isn't that hard
As for connecting it to CI, you're overthinking it.
How do you normally profile your code? Great! Can you do that programmatically? I bet you can, because you're profiling routines.
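For what it's worth, here's a rough sketch of what "profiling a routine programmatically" can look like using Python's stdlib cProfile (the routine and its helper are invented for illustration); since the stats land in a string rather than on stdout, a script or CI step can inspect them:

```python
import cProfile
import io
import pstats

def lookup(needle, haystack):
    # Deliberately O(n) per call, to give the profiler something to find.
    return needle in haystack

def routine():
    data = list(range(20_000))
    return sum(1 for i in range(0, 20_000, 7) if lookup(i, data))

# Profile the routine programmatically -- no interactive tooling required.
profiler = cProfile.Profile()
profiler.enable()
hits = routine()
profiler.disable()

# Dump the stats to a string instead of stdout so code can inspect them.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```

From here it's a small step to, say, parsing the cumulative times and failing a CI job when a hot function regresses past some threshold.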
You're already writing the test cases, right? RIGHT?
Btw, you can get the CI to work in nontrivial ways. It doesn't have to profile every push. You could, idk, profile every time you merge into a staging branch? You can also profile differently on different branches. There's a lot of options between "every commit" and "fuck it, do it in prod". I'm sure you're more than smart enough to figure out a solution that works for your case. Frankly, there's no one-size-fits-all solution here, so you gotta be
Even if it's hundreds or thousands, the difference will generally be imperceptible to humans. And that's all I care about: how humans experience my software. If some background job that runs nightly takes 2 hours, that's fine by me. I'll write it to be efficient, and I've never actually written a job that takes anywhere near that long to run; but if I did, I probably wouldn't waste time and energy optimizing it unless asked to. I've seen jobs written by others that take hours to run, and I haven't done anything about it because nobody's asked me to - because nobody cares.
I mostly work on web apps. If I was working on an application with a large user base and it was struggling to keep up with a large number of requests during peak hours, I might try to optimize the most frequently used or the most performance sensitive endpoints to ease the load. Then I might profile a call to these endpoints to see where I should focus my efforts. But if the app is responding quickly during peak traffic and generally just working perfectly, I see no reason to spend extra time profiling things just for the sake of it. I'm not paying the cloud bills and the people who are aren't asking me to reduce them. Realistically it would probably take years or decades to recoup the cost of having me investigate these things anyway, it's not worth it.
Yes, I already write tests. But like I said in one of my earlier responses, those tests exist to test functionality, not performance. For example, the test dataset might run super fast with your O(n^2) algorithm, but the prod dataset might be much larger and take hours to run. A profiler won't tell you that unless you have a test with a huge dataset to provoke the issue. So now you're writing specialized tests just for profiling, which in my opinion falls under premature optimization. It also makes your test suite take longer to run, which makes you less likely to run the tests as often as you otherwise would.
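That trap is easy to demonstrate with a hypothetical sketch: the same quadratic routine (names and sizes here are invented) is effectively instant at unit-test size but visibly slow at a still-modest "prod" size, and the work grows with the square of the input, so a profiler pointed at the test run sees nothing:

```python
import time

def common_items(xs, ys):
    # O(len(xs) * len(ys)): scans ys once for every element of xs.
    return [x for x in xs if x in ys]

def timed(n):
    # Half-overlapping ranges, so some lookups hit and some miss.
    xs, ys = list(range(n)), list(range(n // 2, n + n // 2))
    t0 = time.perf_counter()
    common_items(xs, ys)
    return time.perf_counter() - t0

t_test = timed(100)   # "unit test"-sized input: effectively instant
t_prod = timed(5000)  # a modest "prod"-sized input: 2500x the work
print(f"n=100: {t_test:.6f}s   n=5000: {t_prod:.6f}s")
```

At n=100 the routine would never show up in a profile of the test suite; the 50x larger input costs 2500x the time, which is the whole argument for letting complexity, not small-input measurements, drive the default choice.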
I'd rather just go with the low complexity option and revisit it later if necessary. I very rarely have any issues with this approach, in fact I'm not sure I have ever had a performance problem caused by my code. If there's code that stalls the application it's always someone else's work. My code isn't perfect, but it's generally fast enough. In my mind that's pragmatic programming - make it work, make it clean, make it fast enough. Most code doesn't need to be anywhere near optimal.