Search - "rerun"
-
"sudo !!" Will rerun your last command with sudo privileges in a Linux environment.
You're welcome. -
Each month my department compiles a 4M-row, 150-column data table for compliance with a federal agency. Before submitting, we check it against about 400 rules.
The existing system was simply 400 queries that ran in sequence, table-scanning 4M rows each time, taking upwards of 6 hours, which is a huge bottleneck, especially if you have to make changes and rerun. Plus the output was rather one-dimensional.
I built a proper normalized database and created a sort of rules engine, running all 400 rules in one table scan. Not only does it complete in 30 minutes, but the reports generate automatically, and the results can be filtered on several dimensions to aid with root-cause analysis.
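A minimal sketch of that single-pass idea in Python, assuming rows come in as dicts and every rule is a plain predicate; all rule, column and function names here are made up, not the actual system:

```python
# Single-pass rules engine: each rule is (rule_id, predicate), and the
# predicate returns True when a row violates that rule.
VALID_STATES = {"OK", "REVIEW"}  # placeholder reference data

RULES = [
    ("R001", lambda row: row["amount"] < 0),
    ("R002", lambda row: row["state"] not in VALID_STATES),
    # ... imagine roughly 400 of these
]

def run_checks(rows):
    violations = {rule_id: [] for rule_id, _ in RULES}
    for row in rows:  # one scan over the 4M rows instead of 400 table scans
        for rule_id, violates in RULES:
            if violates(row):
                violations[rule_id].append(row["id"])
    return violations
```

Same checks, one pass; keeping violations bucketed per rule is also what makes slicing the results along several dimensions cheap afterwards.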
Management was pleased. -
So, some time ago, I was working for a complete puckered anus of a cosmetics company on their ecommerce product. Won't name names, but they're shitty and known for MLM. If you're clever, go you ;)
Anyways, over the course of years they brought in a competent firm to implement their service layer. I'd even worked with them in the past, and it was designed to handle a frankly ridiculous-scale load. After they got the 1.0 released, the manager was replaced with some absolutely talentless, chauvinist cuntrag from a phone company that is well known for having 99% indian devs and not being able to hear you now. He of course brought in his number two, worked on making life miserable, and ran everyone on the team off; inside of a year the entire team was ex-said-phone-company.
Watching the decay of this product was a sheer joy. They cratered the database numerous times during peak-load periods, caused $20M in redis-cluster cost overrun, ended up submitting hundreds of erroneous and duplicate orders, and mailed almost $40K worth of product to a random guy in outer mongolia who is, we can only hope, now enjoying his new life as an instagram influencer. They even terminally broke the automatic metadata, and hired THIRTY PEOPLE to sit there and do nothing but edit swagger. And it was still both wrong and unusable.
Over the course of two years, I ended up rewriting large portions of their infra surrounding the centralized service cancer to do things like, "implement security," as well as cut memory usage and runtimes down by quite literally 100x in the worst cases.
It was during this time I discovered a rather critical flaw. This is the story of what, how and how can you fucking even be that stupid. The issue relates to users and their reports and their ability to order.
I first found this issue looking at some erroneous data for a low value order and went, "There's no fucking way, they're fucking stupid, but this is borderline criminal." It was easy to miss, but someone in a top down reporting chain had submitted an order for someone else in a different org. Shouldn't be possible, but here was that order staring me in the face.
So I set to work seeing if we'd pwned ourselves as an org. I spend a few hours poring over logs from the log service and dynatrace trying to recreate what happened. I first tested to see if I could get a user, not something that was usually done because auth identity was pervasive. I discover that user ids are INCREMENTAL int values straight from the database, used as-is when requesting from the API, so naturally I have a full list of users and their titles and relative positions, as well as reports and descendants, in about 10 minutes.
I try the happy path of setting values for random, known payment methods and org structures similar to the impossible order, and submitting as a normal user, no dice. Several more tries and I'm confident this isn't the vector.
Exhausting that option, I look at the protocol for a type of order in the system that allowed higher level people to impersonate people below them and use their own payment info for descendant report orders. I see that all of the data for this transaction is stored in a cookie. Few tests later, I discover the UI has no forgery checks, hashing, etc, and just fucking trusts whatever is present in that cookie.
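For contrast, the kind of forgery check that should have been there: sign the cookie payload server-side and reject anything that doesn't verify. A bare-bones Python sketch, not their stack:

```python
import hashlib
import hmac
import json

SECRET = b"server-side-only-secret"  # never shipped to the client

def sign(payload: dict) -> str:
    body = json.dumps(payload, sort_keys=True)
    mac = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "|" + mac

def verify(cookie: str) -> dict:
    body, mac = cookie.rsplit("|", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("tampered cookie")  # impersonation attempt dies here
    return json.loads(body)
```

Anything the client edits in the body no longer matches the MAC, so the cookie tampering described here simply wouldn't fly.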
An hour of tweaking later, I'm impersonating a director as a bottom rung employee. Score. So I fill a cart with a bunch of test items and proceed to checkout. There, in all its glory are the director's payment options. I select one and am presented with:
"please reenter card number to validate."
Bupkiss. Dead end.
OR SO YOU WOULD THINK.
One "unimportant" detail I noticed during my log investigations, which the shit-slinging GUI monkeys who butchered the system didn't, was that on a failed attempt to submit payment in the DB, the logs were filled with messages like:
"Failed to submit order for [userid] with credit card id [id], number [FULL CREDIT CARD NUMBER]"
One submit click later and the user's credit card number drops into lnav like a gacha prize. I dutifully rerun the checkout and get an email send notification in the logs for successful transfer to fulfillment. Order placed. Some continued experimentation later and the truth is evident:
With an authenticated user of any privilege, you could place any order, as anyone, using anyone's payment methods, and have it sent anywhere.
So naturally, I pack the crucifixion-worthy body of evidence up and walk it into the IT director's office. I show him the defect, and he turns sheet fucking white. He knows there's no recovering from it, and there's no way his shitstick service team can handle fixing it. Somewhere in his tiny little grinchly manager's heart he knew they'd caused it, and he was to blame for being a shit captain to the SS Failboat. He replies quietly, "You will never speak of this to anyone, fix this discreetly." Straight up hitler's bunker meme rage. -
I have been gone a while. Sorry. Workplace no longer allows phones in the lab and I work exclusively in the lab. Anyway, here is a thing that pissed me off:
Systems Engineer (SE) 1 : 😐 So we have this file from the customer.
Me: 😑 Neat.
SE1: 😐 It passes on our system.
Me: 😑 *see prior*
Inner Me (IM): 🙄 is it taught in systems engineer school to talk one sentence at a time? It sounds exhausting.
SE1: but when we test it on your system, it fails. And we share the same algorithms.
Me: 😮 neat.
IM: 😮neat, 😥 wait what the fuck?
Me: 😎 I will totally look into that . . .
IM: 😨 . . . Thing that is absolutely not supposed to happen.
*Le me tracking down the thing and fixing it. Total work time 30 hours*
Me: 😃 So I found the problem and fixed it. All that needs to happen is for review board to approve the issue ticket.
SE1: 😀 cool. What was the problem?
Me: 😌 simple. See, if the user kicked off a rerun of the algorithm, we took your inputs, processed them, and put them in the algorithm. However, we erroneously subtracted 1 twice, where you only subtract 1 once.
SE1: 🙂 makes sense to me, since an erroneous minus 1 only affects 0.0001% of cases.
*le into review board*
Me: 😐 . . . so in conclusion this only happens in 0.0001% of cases. It has never affected a field test and if this user had followed the user training this would never have been revealed.
SE2: 🤨 So you're saying this has been in the software for how long?
Me: 😐 6 years. Literally the lifespan of this product.
SE2: 🤨 How do you know it's not fielded?
Me: 😐 It is fielded.
SE2: 🤨 how do you know that this problem hasn't been seen in the field?
Me: 😐 it hasn't been seen in 6 years?
IM: 😡 see literally all of the goddamn words I have said this entire fucking meeting!!!
SE2: 😐 I would like to see an analysis of this to see if it is getting sent to the final files.
Me: 🙄 it is if they rerun the algorithm from our product. It's a total rerun, output included. It's just never been a problem til this one super edge case that should have been thrown out anyway.
SE2: 🤨 I would still like to have SE3 run an analysis.
Me: 🙄 k.
IM: 😡 FUUUUUUUUUCK YOOOOOU
*SE3 run analysis*
SE3: 😐 getting the same results that Me is seeing.
Me: 😒 see? I do my due diligence.
SE2: 😐 Can you run that analysis on this file again that is somehow different, plus these 5 unrelated files?
SE3: 😎 sure. What's your program's account so I can bill it?
IM: 😍 did you ever knooooow that your my heeeerooooooo.
*SE3 runs analysis*
SE3: 😐 only the case that was broken is breaking.
SE2: 😐 Good.
IM: 🤬🤬🤬🤐 . . . 🤯WHY!?!?
Me: 😠 Why?
SE2: 😑 Because it confirms my thoughts. Me, I am inviting you to this algorithm meeting we have.
Me/IM: 😑/😡 what . . . the fuck?
*in algorithm meeting*
Me: 😑 *recaps all of the above* we subtract 1 one too many times from a number that spans from 10000 to -10000.
Software people/my boss/SE1/SE3: 🤔 makes sense.
SE2: 🤨 I have slides that have an analysis of what Me just said. They will only take an hour to get through.
Me: 😑 that's cool but you need to give me your program's account number, because this has been fixed in our baseline for a week and at this point you're the only program that still cares. Actually I need the account to charge for the last couple times you interrupted me for some bullshit.
*we are let go.*
And this is how I spent 40+ useless hours against a program that is currently overrunning for no reason 🤣🤣🤣
Moral: never involve math guys in arithmetic situations. And if you ever feel like you're wasting your time, at least waste someone else's money. -
Half way through a 2 hour deployment, and a fucking test fails.
Yes, it takes 2+hours to run the tests.
Rerun in test environment: pass
Rerun in prod: pass
Rerun changed test in prod: fail..
Why, why you got to hate me for?
I love it when production is the place where config gets changed. -
Is this learning job CPU-intensive or memory-intensive?
I don't know and I don't give a flying fuck, because it's 6:20pm and I have not found any of my favorite servers free to rerun this shit the whole fucking week, so this server (which I have actually killed before, btw) can suck a dick and do its fucking job.
🎤🖐️ -
I'm going on vacation next week, and all I need to do before then is finish up my three tickets. Two of them are done save a code review comment that amounts to combining two migrations -- 30 seconds of work. The other amounts to some research, then including some new images and passing it off to QA.
I finish the migrations, and run the fast migration script -- should take 10 minutes. I come back half an hour later, and it's sitting there, frozen. Whatever; I'll kill it and start it again. Failure: database doesn't exist. whatever, `mysql` `create database misery;` rerun. Frozen. FINE. I'll do the proper, longer script. Recreate the db, run the script.... STILL GODDAMN FREEZING.
WHATEVER.
Research time.
I switch branches, follow the code, and look for any reference to the images, asset directory, anything. There are none. I analyze the data we're sending to the third party (Apple); no references there either, yet they appear on-device. I scour the code for references for hours; none except for one ref in google-specific code. I grep every file in the entire codebase for any reference (another half hour) and find only that one ref. I give up. It works, somehow, and the how doesn't matter. I can just replace the images and all should be well. If it isn't, it will be super obvious during QA.
So... I'll just bug product for the new images, add them, and push. No need to run specs if all that's changed is some assets. I ask the lead product goon, and .... Slack shits the bed. The outage lasts for two hours and change.
Meanwhile, I'm still trying to run db migrations. shit keeps hanging.
Slack eventually comes back, and ... Mr. Product is long gone. fine, it's late, and I can't blame him for leaving for the night. I'll just do it tomorrow.
I make a drink. and another.
hard horchata is amazing. Sheelin white chocolate is amazing. Rum and Kahlua and milk is kind of amazing too. I'm on an alcoholic milk kick; sue me.
I randomly decide to switch branches and start the migration script again, because why not? I'm not doing anything else anyway. and while I'm at it, I randomly Slack again.
Hey, Product dude messaged me. He's totally confused as to what I want, and says "All I created was {exact thing I fucking asked for}". sfjaskfj. He asks for the current images so he can "noodle" on it and ofc realize that they're the same fucking things, and that all he needs to provide is the new "hero" banner. Just like I asked him for. whatever. I comply and send him the archive. he's offline for the night, and won't have the images "compiled" until tomorrow anyway. Back to drinking.
But before then, what about that migration I started? I check on it. it's fucking frozen. Because of course it fucking is.
I HAD FIFTEEN MINUTES OF FUCKING WORK TODAY, AND I WOULD BE DONE FOR NEARLY THREE FUCKING WEEKS.
UGH! -
At 20 I thought my life would be an adventure. At 30 it seems like it's a rerun.
The reality is that life is full of grey areas, "good guys and bad guys" on all sides of most issues, and the story and excitement eventually end.
sometimes getting old feels like becoming comfortable with being numb and mediocre.
you are not the star at the center of your own story.
there is no story. there is only today, and then tomorrow, and then the day after that for as long as they happen to go on.
I can see no greater meaning or purpose behind this circus.
people think in months, seasons, years. maybe some of you even have five year plans.
but for me, rome was yesterday. and every rome to come. that's how near it is. It is so close, it and so many times before and after it, I cannot explain the sensation.
and in the vast gulfs of time, I see the wars, the conflicts, the narratives, and they unfold like dust or scum swirling on a pond, mechanistic, telling stories about nothing, algae struggling over territory on a rock.
as clearly as day, I see it all.
I saw your birth, and I saw your death. Your pain, and your greatest joy. How is it possible to love a total stranger and know them intimately because of their shared humanity? And still.
And from afar, in the stillness, I can't help being detached from the world and its problems.
And when we die, it is as if the world dies with us. Because it is not the end of the world, but the death of our own.
Softly go mortals, gently to their gods, like flowers in the fading summer. Never grasping that the permanence of the true identity and the temporality of the spirit are as fundamentally distinct as the permanence of, say, "the G note", against the brief sound it makes when touched.
Eh, forget it. Sentimentality is a curse sometimes. -
The problem I have with atom, vscode, sublime, and notepad++ is that none are available on the command line over SSH, inside tmux. And that's where I do the vast majority of my text editing.
The first text editor I used on the command line was pico, the technological successor of which is nano. I used it because when I was in college in the late '90s, we used pine for our email, and pico was the default editor for pine.
When I got my first job out of college in 2000, I found out about vi, and very quickly fell in love with it, and its technological successor: vim.
The only reason I've never gotten into emacs is because I've never wanted for more than vi/vim. And also because as a system administrator, I'm logging into dozens, if not hundreds, of servers a day. While vi or vim is guaranteed to be on all of them, emacs is not.
So, for me, the use of a desktop text editor like the ones I mentioned at the beginning of this post just doesn't make sense to me. I almost never edit files that live on the computer where I'm sitting, and I'm not interested in doing a commit/push every single time I want to rerun a script. -
Earlier this day, I was about to start a new project. So I copied my favourite gulpfile.js into that project's root and installed all dependencies with npm. After running Gulp for the first time it threw an error.
Silly me tried to fix stuff and started googling the error and trying random things... After a break of a few hours I just fucking reran Gulp and read the fucking error completely. It stood there. The fucking solution just stood there: run "npm blah --force" to reconfigure package blah....
Of course it worked right away and I finally could start working. But this shit took way too long. Why can't I just read the fucking error message? Damn -
This comic strip is a rerun and was published first time in 2008, when internet data collection was starting to get attention. And where are we today, eleven years later...?
http://ars.userfriendly.org/cartoon... -
>compiling a toolchain for my phone
>compiling gcc
>segfault
wtf, i have like 8GB RAM and 32GB Swap on an SSD
>rerun make w/o clean
>continues, no segfault
ok?
>segfault a few minutes later
FUCK
rinse and repeat like 30 times
why
** me setting up GitLab CI **
- run pipeline
- FAIL
- env variable not passed to one of the shell scripts
- set -x, rerun
- FAIL
- same reason. env variable is OK in the `set -x` output
- comment out `set -x`, rerun
- still FAIL
- same reason
- find a `set +x` left in one of the scripts
- comment that out
- rerun
- PASS
- WTF?!?!?!?
- continue swearing over the better half of a day wasted debugging my scripts -
I have 2 juniors working under me that I need to assist with work. I don't mind helping at all because I was in the same boat. The problem is... 1 of the developers asks questions at the last minute (a few hours before the demo of the week's sprint), telling me she doesn't understand. I spend all week asking her if she's okay, does she understand, does she need help, is the work too much, should we take a few hours to rerun through things, and even while explaining things after planning, she just says "yes" and "I understand", has the body language of "I want to get away from here", and doesn't even let me finish my sentences before interrupting me to say "yes" or something along those lines to end the conversation. I don't know what to do because it's going to start affecting my work and the amount of work I can take on for the week, because I have to help her do the work on the last day and finish it just so she can look like the sprint was successful.
Any suggestions to help me help her? I really want to see her succeed, but I can tell she isn't taking it as seriously as she should, or putting in as much as she could, because our company is very flexible with everything, and I don't want to take on a project manager vibe around her. -
Here's some more reason to despise Apple.
In the past few weeks I keep having a problem making the iMac connect to whatever website/host, so I had to rerun whatever I was doing: fetching from github, pushing to github, connecting to a LAN server, pinging to find an IP, accessing a webpage, and so on.
Luckily enough, browsers tend to request again if an error occurs.
At my job, I upload app files to servers, like GooglePlay and AppStoreConnect.
For those who don't know, Google lets you upload the app through the browser (among other ways), while Apple requires you to upload your app through XCode. No other possible way.
Whenever XCode requires an update, the authentication is required, but the authentication server cannot be reached for at least 5-6 tries.
Then I have to upload the app and just to be ready to hit the "upload button" it takes like 3-4 minutes, which might be completely useless if a network error occurs.
How hard is it to make your fucking app-loader try again at least a few times?
After learning a bit about alife I was able to write
another one. It took some false starts
to understand the problem, but afterward I was able to refactor the problem into a sort of alife that measured and carefully tweaked various variables in the simulator, as the algorithm
explored the parameter space. After a few hours of letting the thing run, it successfully returned a remainder of zero on 41.4% of semiprimes tested.
This is the bad boy right here:
tracks[14]
[15, 2731, 52, 144, 41.4]
As they say, "he ain't there yet, but he got the spirit."
A 'track' here is just a collection of critical values and a fitness score that was found given a few million runs. These variables are used as input to a factoring algorithm, attempting to factor
any number you give it. These parameters tune or configure the algorithm to try slightly different things. After some trial runs, the results are stored in the last entry in the list, and the whole process is repeated with slightly different numbers, ones that have been modified
and mutated so we can explore the space of possible parameters.
Naturally this is a bit of a hodgepodge, but the critical thing is that for each configuration of numbers representing a track (and its results), I chose the lowest fitness of three runs.
Meaning hypothetically there's room for improvement with a tweak of the core algorithm, or even modifications or mutations to the
track variables. I have no clue if this scales up to very large semiprime products, so that would be one of the next steps to test.
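Boiled down, the outer search is just mutate-and-keep-the-best. A rough Python sketch of the idea; evaluate() below is a pure stand-in for "run the factoring algorithm over a batch of semiprimes with these parameters and return the % that hit rem % a == 0", so don't read it as the real thing, and every name is made up:

```python
import random

def evaluate(params):
    # Stand-in for the real test: factor a sample of semiprimes using
    # `params` and return the percentage that came back with rem % a == 0.
    return random.uniform(0, 100)

def fitness(params, trials=3):
    # Keep the *lowest* score of three runs, as described above.
    return min(evaluate(params) for _ in range(trials))

def jiggle(params, scale=0.05):
    # Nudge every parameter a little to explore the neighbourhood.
    return [p + p * random.uniform(-scale, scale) for p in params]

def search(start, iterations=1_000_000):
    best, best_fit = start, fitness(start)
    for _ in range(iterations):
        candidate = jiggle(best)
        fit = fitness(candidate)
        if fit > best_fit:  # keep tracks that cover more semiprimes
            best, best_fit = candidate, fit
    return best + [best_fit]  # e.g. [15, 2731, 52, 144, 41.4]
```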
Fitness also doesn't account for return speed. Some of these may have a lower overall fitness, but might in fact have a lower basis
(the value of 'i' that needs to be found in order for the algorithm to return rem%a == 0) for correctly factoring a semiprime.
The key thing here is that because all the entries generated here are dependent on in an outer loop that specifies [i] must never be greater than a/4 (for whatever the lowest factor generated in this run is), we can potentially push down the value of i further with some modification.
The entire exercise took 2.1735 billion iterations (3-4 hours, wasn't paying attention) to find this particular configuration of variables for the current algorithm, but as before, I suspect I can probably push the fitness value (percentage of semiprimes covered) higher, either with a few
additional parameters, or a modification of the algorithm itself (with a necessary rerun to find another track of equivalent or greater fitness).
I'm starting to bump up to the limit of my resources, I keep hitting the ceiling in my RAD-style write->test->repeat development loop.
I'm primarily using the limited number of identities I know, my gut intuition, combined with looking at the numbers themselves, to deduce relationships as I improve these and other algorithms, instead of relying strictly on memorizing identities like most mathematicians do.
I'm thinking if I want to keep that rapid write->eval loop I'm gonna have to upgrade, or go to a server environment to keep things snappy.
I did find that "jiggling" the parameters after each trial helped to explore the parameter
space better, so I wrote some methods to do just that. But what I wouldn't mind doing
is taking this a bit of a step further, and writing some code to optimize the variables
of the jiggle method itself, by automating the observation of real-time track fitness,
and discarding those changes that lead to the system tending to find tracks with lower fitness.
I'd also like to break up the entire regime into a training vs test set, but for now
the results are pretty promising.
I knew if I kept researching I'd likely find extensions like this. Of course tested on
billions of semiprimes, instead of simply millions, or tested on very large semiprimes, the
effect might disappear, though the more i've tested, and the larger the numbers I've given it,
the more the effect has become prevalent.
Hitko suggested in the earlier thread, based on a simplification, that the original algorithm
was a tautology, but something told me for a change that I got one correct. Without that initial challenge I might have chalked this up to another false start instead of pushing through and making further breakthroughs.
I'd also like to thank all those who followed along, helped, or cheered on the madness:
In no particular order ,demolishun, scor, root, iiii, karlisk, netikras, fast-nop, hazarth, chonky-quiche, Midnight-shcode, nanobot, c0d4, jilano, kescherrant, electrineer, nomad,
vintprox, sariel, lensflare, jeeper.
The original write up for the ideas behind the concept can be found at:
https://devrant.com/rants/7650612/...
If I left your name out, you better speak up, there's only so many invitations to the orgy.
Firecode already says we're past max capacity! -
-$ gulp test
*30 seconds later*
SUCCESS
[oh wait, forgot something... Typety type... Fixed. I don't need to rerun gulp test, right?]
-$ git push
*email from CircleCI: BUILD FAILED*
😊🔫 -
AI-based epidemiologist with simulator support to rerun its experiments. It can identify trends in epidemic arrivals and provide solutions to stop them. The advantage will be faster and safer experiments, which are now done manually.
-
To use Unity with VS you have to get Unity Build Tools as a plugin.
Alright, I'll download that.
Oh but now there's an error with connecting to unity, I need to get a newer VS and switch to the 2018 version of the engine.
Ok fine that's annoying but I guess I might as well upgrade.
Oh now there's no Intellisense? I guess I need to reload my project.
Oh what's this? Some major build error due to a missing component from Vs 2015?
This is getting stupid, fine let me install it.
Oh but to install the component you need to rerun the installer for VS, fine I'll redownload that.
Oh but apparently the installer _I JUST DOWNLOADED A FEW SECONDS AGO_ is outdated and needs to be upgraded. I can't _not_ update the installer and still install the components, because that would be stupid. Why would we let the developer decide what versions to use? Obviously they don't know what they're doing. I mean, it's not like they know how to use computers?
To get simple code completion, let's force developers to download an installer that then needs to be updated to install a component for this giant IDE that also requires the 2015 version of the IDE to be installed alongside a special plugin and patch designed for a specific game engine.
All this. For fucking code completion. I can't even get Intellisense to work in VSCode without fixing the issue since the C# extension in VSCode just binds to Visual Studio tools and runs the same shit with a different GUI.
They say that running the same command over and over again is a sign of insanity.
LIKE HELL IT IS!!!
I've been running `terraform apply` for the last hour (trying to dump an EKS token in plain-text, because my k8s-related providers failed to auth to the cluster), and miraculously the problem went away. Now the error is no more.
Insanity?
I beg to differ!
Narf!
I take the day off for a dr appointment cause I know shots make me tired and I won't work well after
But..... my coworker breaks a super important batch script by not reading a pop-up note about a recent fix and a (temporary) manual adjustment that pauses the script until you press a button
Then proceeds to skip all THREE places across the process that would catch the problem caused by not reading the note
And finally sees an issue AFTER the final version has already been sent out to clients....
So as soon as I get home I need to log on and rerun the process, taking my time to read the check spots to make sure values and counts are correct and a new file is sent out
It feels great to take a chunk of my day off to cover a mistake of someone else's
Also should note I'm salaried. So I don't get paid extra for logging on and fixing this on my day off. Kinda sucks but whatever
Going back to a php project after writing loads of typescript on a node stack, I suddenly miss the instantaneous feedback loop on file save via `nodemon` for basic scripts and `mocha --watch --reporter min` for tests.
Using phpunit, I currently have to rerun the tests manually whenever I feel like it. Which now feels so annoying. 'Cause I didn't know better.
Now I was searching for something similar in php, and I find answers[1] pointing me to either set up some npm hooks, set up a gulp task, or use pywatch. phpstorm is also supposed to support file watchers and run tests on every save, yet setting them up feels clunky.
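Until then, a tiny watcher script gets most of the way there. This one leans on the watchdog package (pip install watchdog), which is my own assumption rather than anything phpunit ships with, and it guesses the usual vendor/bin/phpunit path:

```python
# Rerun phpunit whenever a .php file changes (assumes `pip install watchdog`
# and that phpunit lives at vendor/bin/phpunit; adjust the paths to taste).
import subprocess
import time

from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer

class RerunTests(PatternMatchingEventHandler):
    def __init__(self):
        super().__init__(patterns=["*.php"], ignore_directories=True)

    def on_any_event(self, event):
        subprocess.run(["vendor/bin/phpunit"])

observer = Observer()
observer.schedule(RerunTests(), path=".", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```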
[1] http://stackoverflow.com/questions/...
We have all run the same code although it didn't work the first time. And it never worked the second.😂😂
-
The lack of appreciation (from the user/management side) as a backend developer and DevOp is frustrating sometimes. But having nice colleagues who value your work makes it worthwhile.
-
New project, make a simple change, a load of tests fail, stash changes to see if they ever passed, rerun tests: they pass... rubbish, must have been something I did. Unstash changes, rerun tests to check the details: they pass... walk away slowly
-
I wanted to configure a service on Windows to make it rerun a .exe, and I ended up deleting and adding keys to the Windows registry...
Now everything is broken and I'm here looking at my computer burning down.
If anyone ever wanted to create a session with a unique application available on it (since this session would be used in a public area), please let me know you've been at this point too, when you wanted to break a wall!
It's been a good month where honestly I had nothing to rant about. Pretty much doing my own project setting up ELK.
But the last few days I had to return to the reality called teammates....
It was... ok. I mentored one of them, then did the code review yesterday.
And that's when the shit hit the fan.
I told them to do X but then they did Y instead thinking that they were smart.
In hindsight they seem to have no idea wtf they were doing, inexperienced and couldn't even use console.log and JSON.stringify to debug object states...
Which of course now reminded me what's wrong with this team: you've got people jumping around stacks and projects, so they're all mediocre at all of them. Rather than having specific people being good at one of them (aka more experienced than a noob).
And of course this morning, the manager asked me to look into something on a program I haven't supported in a while (there are a few people that are more experienced and know the current state better). And he said this is quick and urgent... And actually when he said that I'm like uh... don't think so...
And the last thing is we had to rerun a report in production, so we needed the shipper team to do it. Asked them to look yesterday; users were waiting.
Today... still not done. And well, I actually can run the report myself locally... takes 5 mins, but in production they need to reload the data, which should take at most 20 mins... Either way... nothing was done.
Oh and I just remembered I raised a request to the IT SA group to have some script installed... That's not done either.
And this is why relying on others, at least these people, is a bad idea..... Unless you are capable of firing them...
Run code
System flag set.
Code crashes
System flag not removed
Fixed code
Code won't rerun because system flag
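The usual fix, sketched in Python: set the flag, do the work in a try, clear it in a finally so a crash can't leave it behind. The flag path and job here are made up.

```python
import os

FLAG = "/tmp/job.running"  # hypothetical run flag

def run_job(work):
    """Run work() while holding a run flag that always gets released."""
    if os.path.exists(FLAG):
        raise RuntimeError("previous run still flagged as active")
    open(FLAG, "w").close()  # set the flag
    try:
        work()  # may raise or crash
    finally:
        os.remove(FLAG)  # cleared even if work() blows up

if __name__ == "__main__":
    run_job(lambda: print("doing the thing"))
```
-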
difficulty in reproducing bug behavior
think I found a solution to prevent the infinite requests being sent by a react component whose code keeps getting rerun by react's scheduler.development.js
I have no fucking clue what's going on, on top of not having any foundation in react
I am trying to make a simple news app using Jetpack Compose but it throws an error (when I rerun the app it runs successfully, but when I log out and try to log in it throws the error).