8 Comments
  • 3
    Real devs test in production.
  • 1
    And with TDD we're not?
  • 1
    @cafecortado in my experience the discipline has given my team and me a footing in the fight since we adopted it as a department. I've definitely tamed some evil code - even when I'm just working on BAU tickets.
    Much more confidence in not breaking shit, and the speed of refactoring is 💦💦💦
  • 7
    with TDD, we're all the legacy tests' bitch.
  • 1
    TDD is a way to make you write twice as much janky code as necessary.
  • 2
    TDD is nice, but I disagree that it has a large benefit long term. In my experience, tests and their framework slowly get abandoned as a repo ages. Devs will naturally ignore the testing as more and more bandaids are added. If your team plans to keep the repo for years and years, TDD definitely makes sense. That's why I usually use e2e, because it's more generalized on output and you don't have like 1000+ tests.
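    (For illustration - a minimal sketch of the e2e style described above, assuming a Playwright setup; the route and selector are hypothetical:)

    ```typescript
    import { test, expect } from "@playwright/test";

    // One e2e test asserts on the final rendered output, so it keeps covering
    // the feature across internal refactors - the "generalized on output" idea.
    test("article page renders its headline", async ({ page }) => {
      await page.goto("/articles/123"); // hypothetical route, resolved against baseURL
      await expect(page.locator("h1")).toBeVisible();
    });
    ```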
  • 2
    Works only if you don't have tight deadlines and you get some breathing space in between builds.
  • 2
    Controversial opinion: TDD only covers the cases the author thinks of during their initial implementation. Quite often the implementation changes during development, but TDD can lock you into not updating the test cases (beyond just enough to make 'em pass), and another dev who comes in later often has a better idea of what the most relevant stuff to test would be.

    And it often turns out that when something breaks, they never ever read our test cases - those went obsolete long ago - so "tests as documentation" often doesn't really work.
  • 0
    …to clarify: this is of course not anti-testing

    Just talking about writing tests before you start implementing (which is TDD) vs writing the same amount of tests after the code is done
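    (To make the before/after distinction concrete, a minimal sketch of the red-green-refactor loop under discussion, assuming Jest; slugify is a hypothetical example function:)

    ```typescript
    // Red: write this first - it fails because slugify() doesn't exist yet.
    test("slugify lowercases and hyphenates", () => {
      expect(slugify("Hello World")).toBe("hello-world");
    });

    // Green: the minimal implementation that makes it pass.
    function slugify(input: string): string {
      return input.trim().toLowerCase().replace(/\s+/g, "-");
    }

    // Refactor: clean up with the test as a safety net.
    // "Test-after" is this exact same test, just written once the code is done.
    ```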
  • 0
    TDD is just a smoke screen
  • 0
    Interesting @jiraTicket. Why do you think your team is hesitant to properly modify the existing tests to drive the system's behaviour to whatever it now needs to be?
    What caused the refactoring step to be skipped, creating all this waste?

    My initial spidey-senses:
    If there are tests that don't provide more value than they cost, then teams should collectively delete them and learn how the waste was created.

    If it's because the tests are too difficult to refactor - maybe they're too big, too complicated, and/or using too many mocks. That's usually the result of the code under test doing too much, with the tests mirroring that complexity (a lot of the same code smells apply to tests as they do to non-test code).
    There are lots of cheap and clean refactorings to tackle this kind of problem without breaking behaviour or creeping up the scope - see the sketch below.

    Do the tests take too long to run?
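    (A sketch of one such refactoring, assuming Jest; all names here are hypothetical. Pull the rule out of a do-everything class into a pure function so the test needs no mocks:)

    ```typescript
    // Before: the code under test does too much, so the test mirrors that
    // complexity with a mock per collaborator (Checkout is a hypothetical stub).
    class Checkout {
      constructor(private db: any, private mailer: any, private logger: any) {}
    }

    test("applies discount (heavily mocked)", () => {
      const db = { loadUser: jest.fn().mockResolvedValue({ tier: "gold" }) };
      const mailer = { send: jest.fn() };
      const logger = { info: jest.fn() };
      const checkout = new Checkout(db, mailer, logger);
      // ...pages of setup before the one assertion we actually care about
    });

    // After: extract the rule into a pure function and test it directly.
    function discountFor(tier: string, total: number): number {
      return tier === "gold" ? total * 0.9 : total;
    }

    test("gold tier gets 10% off", () => {
      expect(discountFor("gold", 100)).toBe(90);
    });
    ```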
  • 1
    @gymmerDeveloper I've found I'm much faster than I was before adopting it, because I'm not manually testing all the time + I get an ever-increasing safety net as I go. My team's productivity has definitely increased as well - with metrics to prove it, despite the same deadlines and restrictions.

    Also:

    Sure, we find holes in our coverage eventually, but we usually get caught by the surrounding tests before we fall and hurt ourselves.

    I have yet to cause an issue in production or our sandbox environments since I adopted testing (years ago). Our releases are faster, more frequent, and more stable than they've ever been, despite huge growth in our team size and restructuring. And dev onboarding has never been so quick.

    I also find it really fun trying to solve the challenge of proving a system helps someone in the way we need it to - in a way that's sustainable, efficient, and useful to the next dev.
  • 1
    @MammaNeedHummus the tests run very fast. The test code is minimal; we find it well structured and easily modified. The whole team spends a lot of time refining the tests, so they're quite well maintained.

    One reason for not doing TDD can be that our tasks are often a bit ad hoc and can be done in multiple steps as separate PRs. We can start a feature without being entirely certain how the final output will end up (we mostly test server-side rendered HTML), doing a bit of experimentation to decide which components to use etc. And we might finally end up doing a smaller portion of the feature as a separate PR. And then we just find doing TDD along the way doesn't really help us.

    To a large extent I also guess it's just a vibe about how progress feels: some are more comfortable going back and forth between red and green tests and the implementation. Personally that workflow doesn't feel fun to me.
  • 1
    @tosensei so then you need tests to test the libs that are used for tests... wait a minute.
  • 0
    @PepeTheFrog yeah... it grows uncontrollably.

    in a way, it's TESTicular cancer.
  • 1
    A follow-up to my last post. This is a better argument than my previous ones:

    My tests are offline and designed around the idea of using mocked API data as input.

    In my test code I want the most minimal possible mocked data (to make it easy to read what I'm testing in each case).

    But during the development of a feature I wanna dabble around with real API data.

    That's why it's way easier to start developing against the real API, to see lots of various types of data. When I'm done I'll be able to write the test with a minimal amount of mock data.

    So doing TDD is not really a reasonable option here, as I often dabble around with real API data in my debugger to see what that data looks like. But my tests are entirely offline.
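    (A sketch of that minimal-mock idea, assuming Jest and server-side rendered HTML; the field names and renderer are hypothetical:)

    ```typescript
    // A hypothetical renderer of the server-side HTML being tested.
    function renderArticle(article: { title: string }): string {
      return `<article><h1>${article.title}</h1></article>`;
    }

    // The test stays offline: the mocked API data contains only the fields
    // this case cares about, not a full captured API response.
    test("renders the article title", () => {
      const article = { title: "Hello" }; // minimal mock as input
      expect(renderArticle(article)).toContain("<h1>Hello</h1>");
    });
    ```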
  • 1
    … as an example: our API has added a new field: article.richText. My task is to render it. I'll start with a naive implementation, browse around my local dev site to see what it looks like and whether there are any quirks - I might discover some fields contain formatting that doesn't render as expected. I will gradually update my implementation. And in the end I will add tests - after having an idea of what the default path and edge cases to look out for would be.
    I've tried doing this with a TDD approach but it just felt super slow, because I gradually discover stuff about the input and output that makes my initial test case assumptions useless.
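    (A sketch of the tests that might come out at the end of that flow, assuming Jest; renderRichText and the quirk are hypothetical:)

    ```typescript
    // A naive first pass, updated after browsing real API data revealed
    // that some richText fields carry markup we don't want rendered.
    function renderRichText(html: string): string {
      return html.replace(/<script[\s\S]*?<\/script>/g, "");
    }

    // Default path - only knowable after seeing what the real data looks like.
    test("renders plain rich text", () => {
      expect(renderRichText("<p>hi</p>")).toContain("<p>hi</p>");
    });

    // Edge case discovered while browsing the local dev site.
    test("strips unexpected script tags", () => {
      expect(renderRichText("<script>x</script>")).not.toContain("<script>");
    });
    ```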
  • 1
    @jiraTicket good point - we haven't been able to solve this problem entirely at my work either, and I'm really keen to learn about what's been done so far to "fix" it.

    We have some integration tests against a third-party API, but it's crazy how broken, unreliable, and different-from-production their sandbox environments are, which has caused issues in the past (we've since had to replace the affected tests with mocks).

    There's a brilliant blog post about minimising mocked data, btw, that I recommend giving a read and a stab at trying if you're interested: https://jamesshore.com/v2/projects/...

    I've also heard a dev lead I respect say that they would use container services like this to emulate third parties that offered no workable sandbox to integrate with: https://www.mock-server.com/

    Anyone else had experience solving this?
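    (One pattern that's helped here, sketched in TypeScript with hypothetical names: hide the third party behind an interface you own, so tests swap in a deterministic fake instead of the broken sandbox. Tools like MockServer do the same thing one level lower, emulating the party over HTTP:)

    ```typescript
    // An interface we own, wrapping the third-party API (names hypothetical).
    interface PaymentGateway {
      charge(amountCents: number): Promise<{ ok: boolean }>;
    }

    // Production wiring talks to the real (or sandbox) API over HTTP...
    class HttpPaymentGateway implements PaymentGateway {
      async charge(amountCents: number) {
        const res = await fetch("https://sandbox.example.com/charge", {
          method: "POST",
          body: JSON.stringify({ amountCents }),
        });
        return { ok: res.ok };
      }
    }

    // ...while tests use a deterministic fake instead of the unreliable sandbox.
    class FakePaymentGateway implements PaymentGateway {
      async charge(_amountCents: number) {
        return { ok: true };
      }
    }

    async function checkout(gateway: PaymentGateway, amountCents: number): Promise<string> {
      const result = await gateway.charge(amountCents);
      return result.ok ? "paid" : "failed";
    }

    test("checkout succeeds when the gateway accepts the charge", async () => {
      expect(await checkout(new FakePaymentGateway(), 500)).toBe("paid");
    });
    ```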
  • 1
    @jiraTicket re that rich text issue: you can defo prototype ideas and hack around - TDD is never about stopping that.

    But prototypes (whose only goal is to fling something together to answer some unknowns and better your footing) are supposed to be disposable and to aid in _guiding_ the real, fleshed-out feature.

    That real fleshing out is where TDD comes in.

    The Pragmatic Programmer's chapter on prototypes puts this better than I can.
  • 1
    BS. TDD, DDD and whatever shit is overrated and useless.

    I did well for 11 years just getting the specifics, writing code and testing once finished or in medium-sized chunks, but not writing tests in advance.

    We did this for software for the main railway companies, for huge pharma companies and so on, and it all went well.

    We are doing TDD on an Angular project (more or less) now and it's a fucking nightmare and it's useless.

    It always has been and it always will be.

    It's a buzzword for incapable devs.
  • 0
    TDD for front end?

    It doesn't scale well to robot tests