
I've looked at code I've written, and on average I fix one minor bug every 10-20 lines.

Is this normal, subpar, or good for a beginner?

Comments
  • 4
I'd have to imagine this varies a lot by language. Python, which has basically zero boilerplate, would likely have a very different rate of errors per line than Java or C#, which require a lot of boilerplate.
  • 1
What do you mean by fixing bugs every X lines of code? Do you go through the code base file by file and figure out that something is a bug? Is that your own code or someone else's?

    No matter the metric, bug-fixing efficiency will also depend on the code base's quality, complexity, language, and scope, so it's hard to give an estimate without a reference.
  • 3
    It's hard to measure quality by "some quantity".

    If you review your code and notice bugs, it's fine.

    Most of programming is developing a muscle memory, in the form of an instinct that tells you when code "smells".

    Sometimes I point at a piece of code and say "it's there", without knowing why. Coworkers start laughing; hours later it turns out that a bug was indeed there.

    I can be completely wrong, of course :) But it's this instinct that has saved me countless times.

    It just develops over time, and reviewing is good training.
  • 4
    It doesn't make any sense as a metric.

    Even ignoring language differences, bugs aren't always one-line fixes; heck, they're not always fixed in one place. Some require refactoring across the board. Some require methods to be torn down and rewritten.

    It also depends on what code you're writing. If it's simple crap to serve a file or forward on a call, I'd expect that to go swimmingly. If you're doing complicated algorithmic stuff and have discovered some edge cases that are buggy, that's expected and par for the course.

    No one talks about bugs every X lines, though. It's not a thing.
  • 1
    Then I'm really showing how green I am!

    Bug fixing is probably the most enjoyable thing I have ever done, hands down. Original code? Meh.

    Naturally I was hoping for a way to measure performance against some industry metric. *shrugs*
    It's kinda surprising that there isn't one.
  • 2
    @Wisecrack I tried to find a way to phrase this without getting your hopes up.

    There are several projects that try to measure such things, from static-code-analysis algorithms to test coverage and so on.

    In my opinion it's barking up the wrong tree.

    "With great power comes great responsibility".

    Most of these measurements are an _indicator_ - but, as with all things, some people try to use them as fact. Which ends badly™.

    The best examples are, e.g., Coverity (static analysis, defects per lines of code) or the all-time favorite, "test coverage in percent".
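
    Just to show how thin that headline number is, here's a rough sketch (Python, with made-up numbers) of the "defects per lines of code" ratio such tools report:

    ```python
    # "Defects per lines of code" is just a ratio; the numbers here are invented.
    defects_found = 12
    lines_of_code = 48_000

    density = defects_found / (lines_of_code / 1000)  # defects per KLOC
    print(f"{density:.2f} defects/KLOC")  # 0.25 -- says nothing about severity,
                                          # test quality, or readability
    ```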

    Please. Don't go that way. :)

    Some projects go so far as to include reports like these (shiny numbers) in their marketing... LibreOffice, for example.

    (Static) code analysis and test coverage are biased - I think that's the best way to describe it. They're useful, but they cannot tell you whether a code base is good or bad.

    Example: if you have 100% test coverage, but each test is a no-op... then the tests are crap. The 100% has no meaning at all. ;)
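
    A minimal sketch of that (Python, hypothetical names): the test below executes every line of parse_price, so a coverage tool like pytest-cov will report 100% for it, yet it asserts nothing and passes no matter how broken the function is:

    ```python
    # A deliberately buggy function: float() raises on inputs like "12.50 EUR".
    def parse_price(text):
        return float(text.lstrip("$"))

    # This "test" executes every line of parse_price, so coverage reports 100%
    # for it -- but there is no assert, so it verifies nothing at all.
    def test_parse_price():
        parse_price("$12.50")
    ```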

    (Static) code analysis is good for spotting hard-to-catch errors. But code is more than just logic. As it is written by humans, it must be understood by humans.

    You cannot derive this with algorithms.
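
    To make that concrete, a small sketch (Python, hypothetical code): the first bug is exactly what static analysis is for, while the second problem is invisible to any tool:

    ```python
    # Static analysis shines here: linters such as pylint typically flag
    # 'total' as possibly used before assignment (it is unbound for []).
    def average(values):
        if values:
            total = sum(values)
        return total / len(values)  # UnboundLocalError (and /0) on empty input

    # No metric captures this: logically fine, but a human has to puzzle it out.
    def average_obfuscated(values):
        return (lambda s, n: s / n if n else 0)(sum(values), len(values))
    ```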

    LibreOffice is a good example. They have a near-zero Coverity defect rate, but their codebase is still a hot-glued mess. Nothing personal here - the LO code base is really an enormous beast, and the community is extremely friendly and helpful - but back to topic. Due to its complexity, and due to being a project that has to deduce and parse incoming files with completely broken formats (looking at you, MS), LibreOffice still has many bugs.

    These two facts (the complexity, and what LibreOffice in a nutshell is - an extremely complex document-parsing system) cannot be represented by an algorithm.

    The reason for the length of this explanation: whenever a controller or manager gets the very wrong idea of measuring an employee's "performance" by metrics like that, I have to intervene. It has happened very often in the last few years...
  • 1
    TL;DR: there are useful tools that can measure certain aspects of a codebase, but they are not representative without a good understanding of the project.

    As such, they don't represent a valid, context-free measurement of the code base.
  • 0
    @IntrusionCM this has been a fantastic and well-written post from someone who sounds experienced at their craft. Thank you for taking time out of your day to correct the record like this.