3 · yehaaw · 3y

Recent posts from @kiki and others made me think about tests. So what are your 2 cents regarding integration tests?

Comments
  • 4
    Test and test well, lest ye crash in production.
  • 5
    My first question to you: go look at some of the "unit tests" you've written and ask yourself, "could I have written these without knowing how I was going to implement the solution?" If the answer is no, it's not a unit test. I don't fully agree with TDD, but I think there's some logic to it.

    You should code against abstractions and interfaces, not implementations. Look at SOLID. If you have a unit test for a parent class, any and all subclasses should pass those tests; otherwise you're not following SOLID. That's Liskov substitution (there's a sketch at the end of this comment).

    Kiki and I disagree, but I think we share a reasonable amount of common ground. A lot of "unit tests" aren't really unit tests.

    They're probably either regression tests that tell you if something has changed (which has its uses, but isn't very interesting to me).

    Or they're integration tests: you write lots of stubs based on your understanding of some libraries, then test that an implementation based on your understanding of those libraries matches stubs based on your understanding of those libraries. It matches, of course, but that doesn't tell you whether your understanding is right.

    The only good, useful, and accurately named unit test is one you can write without needing to know how the thing under test is written.

    Then there's Kiki's argument that a comprehensive set of tests describes the application. I disagree, because there's a difference between emergent behaviour and the rules defining that behaviour. But that kinda goes back to the interface/Liskov point.

    I'm not saying don't do any of these types of tests. But I do think you should understand what you're doing and why.
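
    A minimal sketch of the Liskov point above, assuming pytest and a hypothetical key-value store example: the contract tests are written against the interface only, and every implementation (including subclasses) has to pass the exact same tests.

        import pytest

        # Hypothetical key-value store; the tests only know this interface.
        class MemoryStore:
            def __init__(self):
                self._data = {}

            def put(self, key, value):
                self._data[key] = value

            def get(self, key):
                return self._data.get(key)

        # A subclass; if it broke the contract below, the same tests catch it.
        class CachingStore(MemoryStore):
            pass

        # Run the whole contract against every implementation.
        @pytest.fixture(params=[MemoryStore, CachingStore])
        def store(request):
            return request.param()

        def test_get_returns_what_was_put(store):
            store.put("k", 42)
            assert store.get("k") == 42

        def test_get_missing_key_returns_none(store):
            assert store.get("missing") is None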
  • 1
    Should we let @Kiki know?
  • 0
    And the reason I don't fully agree with TDD is that, again, there's a difference between implementation and interface. If you have to have 100% test coverage and you only write code to make failing tests pass, you're almost certainly failing Liskov substitution.

    You probably debug with print statements, or have some API to visualize data. Testing that you're doing that correctly is pointless, because that code exists only to help you write other code.
  • 0
    @Demolishun Shhhhhh 🤫 it was a long debate last time.

    We've debated this before, discussion here: https://devrant.com/rants/4751612/...
  • 1
    If you need tests to make sure you didn't forget something, well, how can you be sure you didn't forget to test for that particular case as well?

    It's actually more common than you might think: someone writes tests, following everything @atheist said, but forgets to test many edge cases. They write code which implicitly handles most (if not all) of those edge cases, it gets 100% test coverage, QA gives it a pass, everyone's happy. Then later someone refactors that code (perhaps for better performance), making sure the original tests still pass. QA doesn't pay much attention since it's just a refactor and the tests haven't changed, but the edge case logic has been lost in the process...

    If you're adding a safety net, make sure it won't fail when you need it the most; otherwise it's better not to waste your time on a poor safety net and instead take better care that you won't need it at all.
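
    A toy illustration of that failure mode (all names hypothetical): the original implementation handles edge cases implicitly, the only test pins down the happy path, and a later "pure refactor" silently drops the edge cases while keeping the test green.

        # Original: " ".join(s.split()) implicitly collapses tabs,
        # newlines, and runs of spaces of any length.
        def normalize(s):
            return " ".join(s.split())

        # The only test ever written: full line coverage, happy path only.
        def test_normalize():
            assert normalize("hello  world") == "hello world"

        # A later refactor "for performance". The test above still passes,
        # but tabs and triple spaces are no longer handled: the implicit
        # edge case logic is gone, and no test notices.
        def normalize_fast(s):
            return s.strip().replace("  ", " ")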
  • 1
    Also, UT, IT, FT, and E2ET (unit, integration, functional, and end-to-end tests) are only automated compliance methods... which is cool and necessary, BUT NOT sufficient!!!

    Exploratory tests, sometimes monkey testing, and chaos testing are needed to have real Quality Assurance (and a segregation of responsibilities enforced between the "implementer" and the "tester").

    That's the difference between ontology and epistemology...
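
    For the monkey-testing point, a minimal sketch (the function under test is hypothetical): throw seeded random junk at the unit and assert only that it never blows up.

        import random
        import string

        # Hypothetical function under test: a naive query-string parser.
        def parse_query(q):
            return dict(pair.split("=", 1) for pair in q.split("&") if "=" in pair)

        def test_monkey_parse_query():
            rng = random.Random(0)  # seeded, so any failure is reproducible
            for _ in range(10_000):
                junk = "".join(rng.choice(string.printable)
                               for _ in range(rng.randint(0, 40)))
                parse_query(junk)  # the only assertion: no exception, ever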
  • 0
    Well, I am a fan of different types of tests. There are cases where I definitely want unit tests which test just a single function (e.g. tax calculation inside a cart). Most of the time I stay with integration or e2e tests, because they not only test the actual functionality the software provides but also let me refactor sizeable parts of an application without rewriting a lot of tests.
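
    For instance, the kind of single-function unit test meant here (names hypothetical, pytest assumed):

        from decimal import Decimal

        # Hypothetical cart helper: pure and I/O-free, ideal for a unit test.
        def calculate_tax(net_amount: Decimal, rate: Decimal) -> Decimal:
            return (net_amount * rate).quantize(Decimal("0.01"))

        def test_calculate_tax():
            assert calculate_tax(Decimal("100.00"), Decimal("0.19")) == Decimal("19.00")

        def test_calculate_tax_rounds_to_cents():
            assert calculate_tax(Decimal("0.99"), Decimal("0.19")) == Decimal("0.19")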
  • 0
    I don't know if this qualifies as testing, but I made a test project to test out a node system I built. Being a much smaller app than my main project, it allowed me to diagnose some things with shorter compile times. It also provided a small reference as to how to use the node system.
  • 1
    I haven't had much contact with testing, as in "not my beer" (not my problem).

    What I despise is when tests aren't properly isolated and don't follow a clean workflow...

    E.g. letting integration tests run rampage on prod; having different configurations / environments / ...; keeping the integration-test workflow separate from the workflow for releasing artifacts (so you can end up with a released artifact despite testing having gone wrong); not clearly separating shared resources (e.g. databases, memory, folders, ...; there's a sketch of that at the end of this comment); and so on.

    When people tell me explicitly that their test suite is great, but it took prod down... I really want to take a dildo, write "I tested smart, I went full retard" on it, and stick it to their forehead.

    I call this marvelous beast the testicorn.

    ... Yes... I've had many testicorns in my life, which might be a reason I never touched test suites myself; too much PTSD. I'd rather play with hardware.
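
    A minimal sketch of the shared-resources point (pytest assumed): give every test its own throwaway database in its own temp folder, so no test can trample another test's data, let alone prod's.

        import sqlite3

        import pytest

        # pytest's built-in tmp_path fixture gives each test a fresh
        # directory, so every test gets its own isolated database file.
        @pytest.fixture
        def db(tmp_path):
            conn = sqlite3.connect(str(tmp_path / "test.db"))
            conn.execute("CREATE TABLE users (name TEXT)")
            yield conn
            conn.close()

        def test_insert(db):
            db.execute("INSERT INTO users VALUES ('alice')")
            assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1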
  • 0
    @IntrusionCM ++ for testicorn
  • 1
    Just a quick reminder that putting an obscene amount of mayonnaise in all the food you eat doesn't make you a bad person.

    It's just that people will prefer to interact with you as little as they possibly can.
  • 0
    I almost always want to implement at least some level of integration testing.

    You can have a perfect suite of unit tests to prove all your code works in isolation, but until you run at least some level of integration testing you'll never guarantee those units work together as expected.
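
    A tiny sketch of how that bites (names hypothetical): both units pass their unit tests in isolation, and the mismatch only shows up once they're wired together.

        # Unit A: returns a price in cents.
        def get_price_cents(item):
            return {"book": 1250}[item]

        def test_get_price_cents():
            assert get_price_cents("book") == 1250

        # Unit B: formats a price given in dollars.
        def format_price(dollars):
            return f"${dollars:.2f}"

        def test_format_price():
            assert format_price(12.50) == "$12.50"

        # Integration: each unit is "proven" correct in isolation, but they
        # disagree about units of measure. This test fails with "$1250.00",
        # and only an integration test could have caught it.
        def test_checkout_shows_correct_price():
            assert format_price(get_price_cents("book")) == "$12.50"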