
According to the Wikipedia article, increased cyclomatic complexity leads to harder-to-test code and a higher likelihood of defects.

I'm curious how the analysis in the blog post works. In languages like Python it's common to hide branches inside data-structure lookups, like the following:

  def f():
      ...

  def g():
      ...

  def h(x):
      return {
          True: f,
          False: g,
      }[not x]()



Slightly off-topic, but why would you build up a dictionary in h(), index it, and then call the returned function? The dictionary must be garbage collected later too. This seems rather inefficient compared to a simple if/then/else, and I think readability suffers as well.
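
For reference, a minimal if/else sketch of the same dispatch (my own rewrite, reusing the parent's f and g and keeping its `not x` behaviour):

  def h(x):
      # Explicit branch doing the same dispatch as the dict version;
      # the decision point is visible to a complexity counter.
      if not x:
          return f()
      return g()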


It's likely the parent wrote it this way for brevity, while still showing the potential for complexity, which I believe is the key point here, and I would agree with it.

The dict-as-a-switch expression is common in Python, but in larger examples it can be masked in the way the parent has shown.

I have no idea how we could apply static analysis techniques to a construct like this. To be honest, I tend to avoid this approach on readability grounds. It could be cleaned up with a dict subclass (a sketch of what I mean is below), but I think inlining the dict-call mechanics is simpler to read, even if it produces larger code, leads to repetition, and couples you to this approach (which would be a strong consideration when writing a library, where this type of code is most prevalent).
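
Something like this is what I mean by a dict subclass (the Dispatch name and shape are just my own illustration, reusing the f/g example above); it tidies up call sites but still hides the branch from a line-oriented complexity tool:

  class Dispatch(dict):
      # Hypothetical helper: look up a callable by key and invoke it,
      # so call sites read like handle(key) instead of dict[key]().
      def __call__(self, key, *args, **kwargs):
          return self[key](*args, **kwargs)

  handle = Dispatch({True: f, False: g})

  def h(x):
      return handle(not x)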


Due to the dynamic nature of Python, static analysis to determine CC is hard.

CC is useful for determining the number of tests needed for complete branch/path coverage, as long as you don't pull the switch-on-a-dictionary trick (among others).
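
To make that concrete, here's a rough sketch (my own example, not from the article) of how the two styles look to a naive CC count, which is just decision points plus one:

  def h_branchy(x):
      # One explicit decision point: CC = 2, so complete branch
      # coverage needs at least two test cases.
      if not x:
          return f()
      return g()

  def h_dict(x):
      # No visible decision point: a naive CC count reports 1,
      # yet there are still two behaviours that need testing.
      return {True: f, False: g}[not x]()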



