34 · R5on11c · 7y

When debugging and testing programs against huge databases, I accidentally started breaking records in this company.
So far: my biggest curl command, with 5000 UUIDs in it (it crashed my terminal, which had never happened before), my longest curl request (over an hour), and now GitHub ... what were your most ridiculous proportions in coding?
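A side note on the 5000-UUID curl command: what usually kills the terminal is the enormous argv string, not curl itself. A minimal sketch of the workaround, writing the payload to a file and sending it with `-d @file` (the endpoint URL and payload shape here are made up for illustration):

```shell
# Generate a JSON array of 5000 throwaway UUIDs into a file instead of
# pasting them inline: huge inline argument strings are what overwhelm
# the terminal and can bump into ARG_MAX limits.
python3 -c 'import json, uuid; print(json.dumps([str(uuid.uuid4()) for _ in range(5000)]))' > uuids.json

# Hypothetical endpoint; -d @uuids.json makes curl read the request body
# from disk, so the command line itself stays tiny.
# curl -X POST -H 'Content-Type: application/json' -d @uuids.json https://api.example.com/records/bulk-delete

wc -c uuids.json   # the payload lives in the file, not on the command line
```

The actual request is commented out since the endpoint is invented; the point is only that the body comes from a file.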

Comments
  • 1
    Mine? Nowhere near those numbers
  • 6
When I had just started with MySQL and didn't know about indexing, I ran joins over two tables: one with about 30k records and one with over 300k. Queries usually took anywhere between 4 and 8 hours.
Did this twice, then did some research, found out about indexing, and cut the duration down to a minute.
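The fix described above can be sketched at the same scale with SQLite's in-memory engine (table and column names are made up; the commenter used MySQL, but the indexing principle is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two tables at roughly the sizes from the story: ~30k and ~300k rows.
cur.execute("CREATE TABLE small (id INTEGER PRIMARY KEY, val TEXT)")
cur.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, small_id INTEGER, val TEXT)")
cur.executemany("INSERT INTO small (id, val) VALUES (?, ?)",
                ((i, f"s{i}") for i in range(30_000)))
cur.executemany("INSERT INTO big (id, small_id, val) VALUES (?, ?, ?)",
                ((i, i % 30_000, f"b{i}") for i in range(300_000)))

# Without an index on big.small_id, the join re-scans `big` for every row
# of `small`. One CREATE INDEX turns each lookup into a B-tree probe.
cur.execute("CREATE INDEX idx_big_small_id ON big (small_id)")

cur.execute("SELECT COUNT(*) FROM small JOIN big ON big.small_id = small.id")
print(cur.fetchone()[0])  # → 300000

# The query plan confirms whether the index is actually used.
for row in cur.execute(
        "EXPLAIN QUERY PLAN "
        "SELECT * FROM small JOIN big ON big.small_id = small.id"):
    print(row)
```

The `EXPLAIN QUERY PLAN` output should mention a search on the index rather than a full table scan of `big`, which is the whole hours-to-a-minute difference.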
  • 3
Sure that's not whitespace or line-ending changes somehow being committed? The added and removed counts are oddly close.

    Or moved files without using git mv?
  • 0
@xsacha I changed a JSON file that serves as the base of a database. I added the last entry at the end so the lines wouldn't all shuffle downwards. The only thing I can imagine happened is the removal of carriage returns or something. I use Vim and it automatically gets rid of those.
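The carriage-return theory is easy to verify: stripping CRLF line endings makes git report every line as both added and removed, and `git diff --ignore-cr-at-eol` hides exactly that. A minimal reproduction (repo and file names are invented):

```shell
# Set up a throwaway repo containing a file with CRLF line endings.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
printf 'line one\r\nline two\r\n' > data.json
git add data.json
git -c user.name=demo -c user.email=demo@example.com commit -qm 'CRLF version'

# Simulate an editor silently dropping the carriage returns.
printf 'line one\nline two\n' > data.json

git diff --stat              # reports the whole file as changed
git diff --ignore-cr-at-eol  # empty: only the line endings differ
```

`git diff -w` (ignore all whitespace) would show the same empty diff; if the counts collapse to zero with either flag, line endings are the culprit.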
  • 0
    @xsacha I never knew about git mv
  • 5
Most ridiculous thing I've ever had to deal with was a 156 TB (not a typo) Postgres database. The biggest table was about 12 TB.

It contained lots of time-series measurement stuff coupled to raw DICOM-like layered data (ultra-high-resolution, 3-dimensional scan data, through time). Each layer and timeframe was broken up in the database and enriched by recognition software that automatically marked deviations.

You could query the journey of a set of bolts through the factory, get a table of all historically applied torque values, and trace how one of them caused a fracture that made a piece fail during a stress test. Usually that wasn't even needed, because the system warned before the test.

    QA in the aerospace industry is crazy.