Search - "timestamp"
-
An entirely typical exchange at work:
PM: How long would it take to build an application that collates Gubblefluffs and exports them as a PDF?
ME: Hard to say. What’s a Gubblefluff?
PM: Nothing complex. It's basically an object with some stuff in.
ME: Erm, okay. So I'll define a Gubblefluff object plus methods to add, edit and delete, then for each Gubblefluff have it write a line to a PDF.
PM: It will need to email that PDF to somebody.
ME: Okay, cool. “Gubblefluffs-by-email” should take about a day.
6 hours later…
ME: I’ve done Gubblefluffs-to-pdf, I’m not clear on what’s in a Gubblefluff but I’ve made it flexible so it can take almost anything.
PM: No, a Gubblefluff can ONLY be one of 4 Snigglefingers plus a timestamp and some JSON.
ME: What? Right. Okay. What’s a Snigglefinger?
PM: (sighs) A Snigglefinger is the collection of relevant Babelsets.
ME: Babelsets?
PM: Yeah, a user can have any number of Babelsets but they must correspond to one of the four types of Snigglefingers.
ME: There are users!?
PM: Of course!
ME: But I’ve not coded anything for users.
PM: Shit. I’ve told the client they can have it today. How long to add in users?
ME: And Babelsets, and Snigglefingers and the new Gubblefluff rules?
PM: Yeah.
6 days later…
ME: This is done now. It’s a beast but it works. Who should it email the PDFs to?
PM: Client X, plus cc to Y and bcc to Z.
ME: What? It doesn't support CC and BCC!
1 hour later…
ME: This is done. I’ve tested it and sent you a copy of the PDF it generates.
PM: Okay thanks. Is the cron running daily?
ME: What cron?
…
ME: Okay, so the cron’s running once a day at 8pm.
PM: Oh, it’ll need to be at 3:15pm. That’s when we’ve told the client they’ll get it.
ME: Right. I’ll change it...
PM: Also, the PDF you sent me looks nothing like the visual.
ME: What visual?
...
-
Client: "Do you think we could finish specs in week 33, see a demo in week 35, and aim for the product to be finished in week 39?"
I jump on the conference room table, rip the shirt off my sweaty chest, and yell:
"WEEKS OF WHAT? 31 WEEKS SINCE YOU BECAME A CLIENT, 35 WEEKS FROM NOW, 39 WEEKS INTO THE PREGNANCY? BLOODY FUCKING HELL MAN, DO YOU HAVE TO TALK LIKE A RETARD?"
Client, unfazed: "Weeks since the start of the year, sir"
Me, swinging my pants above my head like a lasso:
"WHAT THE FUCK KIND OF SNOWFLAKE ARE YOU, YOU REALLY EXPECT ME TO COUNT THE WEEKS SINCE THE START OF THE YEAR? WHAT ABOUT JUST USING DAY OF THE MONTH YOU OBNOXIOUS DIMWIT?"
Client: "We always use weeks at our company to plan things"
Me, winding the legs of my pants around the neck of the client:
"I HATE IT WHEN PEOPLE USE WEEKNUMBERS, JAKE. I. FUCKING. HATE. IT."
Client, still pretending everything is fine: "If you want I could send you a screenshot of my outlook calendar?"
Me, sitting in underpants on the client's back, sweaty legs wrapped around his waist, trying to pull out his gel-infested manager-hair while strangling him with my pants:
"TIME OF DEATH, UNIX TIMESTAMP 1595240810, ISO 8601 DATE 2020-07-20T10:26:50+00:00. ANOTHER PROJECT SUCCESSFULLY WRAPPED UP"
(parts of this story may have been dramatized to reflect my underlying emotions)
-
IntelliJ IDEA just saved my ass!
I tried deleting a resource file I had staged, but not committed yet.
A dialog comes up asking to delete alternative configs with "Yes" as the default.
Boom! After I braindead hit the enter key all other files vanished too!
I checked Git and saw to my horror that the files were also not tracked anymore.
I hastily look up the last backup timestamp - an hour ago - fuuuuu!
I just lost about an hour of work.
I was about to give up and start from scratch when I look at the edit menu in my IDE.
Turns out you can actually undo multiple file deletions!
Kudos to the girls and boys at JetBrains! You saved the day! 😙
-
So today (or a day ago or whatever), Pavel Durov attacked Signal by saying that he wouldn't be surprised if a backdoor were discovered in Signal because it's partially funded by the US government (or some part of it).
Let's break down why this is utter bullshit.
First, he wouldn't be surprised if a backdoor were discovered 'within 5 years from now'.
- Teeny tiny little detail: THE FUCKING APP IS OPEN SOURCE. So yeah sure, go look through the code! Good idea! You might actually learn something from it as your own crypto seems to be broken! (for the record, I never said anything about telegram not being open source as it is)
sources:
http://cryptofails.com/post/...
http://theregister.co.uk/2015/11/...
https://security.stackexchange.com/...
- The server side code is closed (for both Signal and Telegram). Well, if your app is open source, built on one of the strongest cryptographic protocols in the world, and has been audited, then even if the server gets compromised, the hackers still get nowhere.
- Metadata. Signal saves the following and ONLY the following: timestamp of registration, timestamp of the last connection with the server (both rounded to the day, not to the second), your phone number, and your contact details (if you authorize it; phone numbers only) in HASHED (BCrypt, I thought?) format.
There have been multiple Telegram metadata leaks, and it's pretty well known that it saves way more than necessary.
So, before you start judging an app which is open, uses one of the best crypto protocols in the world while you use your own homegrown horribly insecure protocol AND actually tries its best to save the least possible, maybe try to fix your own shit!
*gets ready for heavy criticism*
-
CHILD: But how can Santa deliver toys to every little boy and girl on his list in one night?
MEH: (laughs) It's quite simple. The items on Santa's list are called blocks, and each block in his "blockchain" typically contains a hash pointer, a timestamp, and transaction data...
-
I've always found those "age++" rants to be annoying.
are you people storing age as an integer rather than as an epoch timestamp?! seems rather tedious to upkeep.
either way, another year down! (that's 31,536,000 seconds for those of you counting correctly.)
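(for the record, the point of the epoch approach is that age is computed, never stored; a minimal sketch:)
// Store the birth date as an epoch timestamp (seconds) and derive age on
// demand -- no yearly "age++" maintenance required.
function ageFromEpoch(birthTimestampSeconds) {
  const birth = new Date(birthTimestampSeconds * 1000);
  const now = new Date();
  let age = now.getUTCFullYear() - birth.getUTCFullYear();
  const birthdayPassed =
    now.getUTCMonth() > birth.getUTCMonth() ||
    (now.getUTCMonth() === birth.getUTCMonth() &&
      now.getUTCDate() >= birth.getUTCDate());
  if (!birthdayPassed) age -= 1; // birthday hasn't happened yet this year
  return age;
}
console.log(ageFromEpoch(652060800)); // someone born 1990-08-31 UTC
-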
Our front-end team is pissed at me because they found a feature I pushed last month and claimed they didn't know it existed ("how can we help clients if we don't know what's going on with the platform?"). I pointed them to the Pivotal story for the feature, which they were included in, as well as the full changelog announcement I made to the entire team the day I pushed it.
Knowledge is power, especially when it's in writing with a timestamp.
-
Just spent 15 minutes trying to explain Unix time and Y2K to a liberal arts major who wanted to know why 2038 is such a huge deal. It was technical, frustrating, and challenging. Kinda like debugging.
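(the whole explanation fits in one line of code, for what it's worth:)
// The 2038 problem in one line: 2^31 - 1 seconds after the Unix epoch is
// where a signed 32-bit time_t overflows.
console.log(new Date((2 ** 31 - 1) * 1000).toISOString());
// -> 2038-01-19T03:14:07.000Z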
-
Well, if your test fails because it expects 1557525600000 instead of 1557532800000 for a date, it tells you exactly: NOTHING.
Unix timestamps have their point, yet in some cases human readability is a feature. So why the fuck not display them in a human-readable format?
Now if you'd see:
2019-05-10T22:00:00+00:00
vs the expected
2019-05-11T00:00:00+00:00
you'd know right away that the first date is wrong by an offset of 2 hours because somebody fucked up timezones and wasn't using a UTC calculation.
So even if you want your code to rely on timestamps, at least visualize your failures in a human-readable way. (In most cases I'd argue that keeping dates as ISO strings would be JUST FUCKING FINE performance-wise.)
Why make me parse numbers? Show me the meaningful data.
Timestamps are for computers, dates are for humans.
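A sketch of what I mean, assuming a hand-rolled assert helper: compare the raw numbers, but report ISO strings.
// Compare timestamps numerically, but fail with human-readable ISO dates.
function assertTimestampEqual(actual, expected) {
  if (actual !== expected) {
    throw new Error(
      `expected ${new Date(expected).toISOString()} ` +
      `but got ${new Date(actual).toISOString()}`
    );
  }
}
assertTimestampEqual(1557525600000, 1557532800000);
// Error: expected 2019-05-11T00:00:00.000Z but got 2019-05-10T22:00:00.000Z
-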
noob misconception #527: during my first hackathon i didn't know what version control was (i thought github was this magical elite hacker tool), so i'd copy my code into a google doc every few minutes along with a timestamp 😫
-
What features would you want in a logger?
Here's what I'm planning so far:
- Tagged entries for easy scanning of log file
- Support for indenting to group similar sequential entries
- Multiple entry types (normal, info, event, warning, error, fatal, debug, verbose)
- Meta entries, so the logger logging about itself, e.g. disk i/o failures.
- Ability to add custom entry types, including tag, log-level, etc.
- Customizable timestamp function
- Support for JS's async nature -- this equates to passing a unique key per 'thread'; the logger will re-write all the parent blocks for context, if necessary. if that sounds confusing, it's okay; just trust that it makes sense.
- Caching, retries, etc. in the event of disk i/o issues.
- Support for custom writers, allowing you to e.g. write logs to an API rather than console or disk.
How about these features?
- Multiple (named) logs with separate writers (console, disk, etc.)
- Ability to individually enable/disable writing of specific entry types. (want verbose but not info? sure thing, weirdo!)
- Multiple writers per log. Combined with the above, this would allow you to write specific entry types (e.g. error, warning, fatal) to stderr instead of stdout, or to different apis.
- Ability to write the same log entry to multiple logs simultaneously
What do you think of these features?
What other features would you want?
I'm open to suggestions!
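For the multiple-writers idea, a minimal sketch (all names made up, not a real library):
// Minimal multi-writer logger: each log fans out entries to every writer
// attached to it (console, disk, API, ...), with per-type enable/disable.
function createLogger(writers, enabled = new Set(['info', 'warn', 'error'])) {
  const write = (type, message) => {
    if (!enabled.has(type)) return; // e.g. drop 'verbose' but keep 'info'
    const entry = { type, message, ts: new Date().toISOString() };
    for (const w of writers) w(entry);
  };
  return {
    info: (m) => write('info', m),
    warn: (m) => write('warn', m),
    error: (m) => write('error', m),
  };
}
const consoleWriter = (e) => console.log(`[${e.ts}] ${e.type}: ${e.message}`);
const log = createLogger([consoleWriter]);
log.info('logger online');
-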
Today was a manic-depressive kind of day. Spent the morning helping some developers with getting their code to run a stored procedure to drop old partitions, but it wasn't working on their end. It was a fairly simple proc. But working with partitions is a little like working with an array. I figured out that they were passing the wrong timestamp, and needed to add +1 to delete the right partition. Got that sorted out, and things were good. Lunch time.
After lunch I did some busy work, and then the PO comes up at about 2PM and says he's assigned some requests to me. The first was just attaching some scripts. Easy. The second, the user wants a couple of schemas exported ... at 6PM. I've been in the office since 6:45AM.
While I'm setting up some commands to run for the data export, a BA walks up and asks if I'm filling in for another DBA who is out for a few weeks. Yep. There's a change request that hasn't been assigned, and he normally does the work. I ask when it's due. Well, the pre-implementation was supposed to be done in the morning, but it wasn't, and we're in the implementation window ... half way through. I bring up the change task and look at it. Create new schema and users. That's all it says. The BA laughs. I tell him I need more to go on. 10 minutes later he sends an email with the information. There's only two hours left in the window, and I can only use half of it, because the production guys have to do their stuff, and we're in their window. Now I'm irritated, because I'm new to Oracle, and it's an unforgiving mistress. Fortunately, another DBA says he'll do it, so that we can get it done in time. But he can't work it either, because Dev DBAs don't have access to QA, and the process required access for this task. Gets shelved until the access issue is resolved. It's now after 4:15PM. I'm going to be in traffic with that 6PM deadline.
I manage to get home and to the computer by 5:45PM. Log in. Start VPN. A box pops up on screen: Java needs to update. I choose skip update. The box pops up again. It won't let me log in until Java is current. Passed.
I finally get logged in, and it's 6:10PM. I'm late getting the job started. I pull up Putty, log into the first box, paste my pre-prepared command into the command line, and hit an error. Command not found. I'm tired, so it takes a moment to sink in. I don't have time for this.
I log into DBArtisan and pull up the first database, use the wizard to set up the job, and off it goes. Yay. Bring up the second database, and have to enter the connection info. Host not found. Wut? Examine the host name. Yep, it's correct. Try a different method. Host not found. Go back to Putty. Log in. Paste string. Launch. Command not found. Now my brain is quitting on me. Why now? It's after 6:30PM. Fiddle with some settings, reset $ORACLE_HOME. Try again. Yay. It works. I'm done. It's after 7PM.
There is nothing like technology to snatch the euphoria of a success away from you. It's a love-hate thing, but I wouldn't trade it for anything else. I'm done. Good night.
-
Still dealing with the web department and their finger pointing after several thousand errors logged.
SeniorWebDev: “Looks like there were 250 database timeout errors at 11:02AM. DBAs might want to take a look.”
I look at the actual exceptions being logged (bulk of the over 1,600 logged errors)..
“Object reference not set to an instance of an object.”
Then I looked at the email timestamp…11:00AM. We received the email notification *before* the database timeout errors occurred.
I gathered some facts…when the exceptions started, when they ended, and used the stack trace to find the code not checking for null (maybe 10 minutes of junior-dev detective work). Sent the data to the 'powers that be' and carried on with my daily tasks.
I attached what I found (not the actual code, it was changed to protect the innocent)
Couple of hours later another WebDev replied…
WebDev: “These errors look like a database connectivity issue between the web site and the saleitem data service. Appears the logging framework doesn’t allow us to log any information about the database connection.”
FRACK!!...that Fracking lying piece of frack! Our team is responsible for the logging framework. I was typing up my response (having to calm down) then about a minute later the head DBA replies …
DBA: “Do you have any evidence of this? Our logs show no connectivity issues. The logging framework does have the ability to log an extensive amount of data regarding the database transaction. Database name, server, login, command text, and parameter values. Everything we need to troubleshoot. This is the link to the documentation …. If you implement the one line of code to gather the data, it will go a long way in helping us debug performance and connectivity issue. Thank you.”
DBA sends me a skype message “You’re welcome :)”
Ahh..nice to see someone else fed up with their lying bull...stuff. -
Sleeping the thread for 1 sec, because the database had no real timestamp and a transaction on the same item within the same second would lead to a duplicate primary key...
No real feature, but it is a bug, and this makes it a feature, I guess.
-
Want to make someone's life a misery? Here's how.
Don't base your tech stack on any prior knowledge or what's relevant to the problem.
Instead design it around all the latest trends and badges you want to put on your resume because they're frequent key words on job postings.
Once your data goes in, you'll never get it out again. At best you'll be teased with little crumbs of data but never the whole.
I know, here's a genius idea: instead of putting data into a normal database then using a cache, let's put it all into the cache, and by the way, it's a volatile cache.
Here's an idea. For something as simple as a single log, let's make it use a queue that goes into a queue that goes into another queue that goes into another queue, all of which are black boxes. No rhyme or reason, queues are all the rage.
Have you tried: Let's use a newfangled tangle, trust me it's safe, INSERT BIG NAME HERE uses it.
Finally it all gets flushed down into this subterranean cunt of a sewerage system and good luck getting it all out again. It's like hell except it's all shitty instead of all fiery.
All I want is to export one table, a simple log table with a few GB to CSV or heck whatever generic format it supports, that's it.
So I run the export-table-to-file command and off it goes, only for timeouts to start piling up less than a minute later until it aborts. WTF. So then I set the most obvious timeout setting in the client, no change; then another timeout setting on the client, no change; then I try to put it in the client configuration file, no change; then I set the timeout on the export query, no change; then finally I bump the timeouts in the server config, no change. Then I find someone has downloaded it from both tucows and apt, but they're using the tucows version, so its real config is in /dev/database.xml (don't even ask). I increase that from seconds to a minute; it's still timing out after a minute.
In the end I have to make my own and this involves working out how to parse non-standard binary formatted data structures. It's the umpteenth time I have had to do this.
These aren't some no name solutions and it really terrifies me. All this is doing is taking some access logs, store them in one place then index by timestamp. These things are all meant to be blazing fast but grep is often faster. How the hell is such a trivial thing turned into a series of one nightmare after another? Things that should take a few minutes take days of screwing around. I don't have access logs any more because I can't access them anymore.
The terror of this isn't that it's so awful, it's that all the little kiddies doing all this jazz for the first time and using all these shit wipe buzzword driven approaches have no fucking clue it's not meant to be this difficult. I'm replacing entire tens of thousands to million line enterprise systems with a few hundred lines of code that's faster, more reliable and better in virtually every measurable way time and time again.
This is constant. It's not one offender, it's not one project, it's not one company, it's not one developer, it's the industry standard. It's all over open source software and all over dev shops. Everything is exponentially becoming more bloated and difficult than it needs to be. I'm seeing people pull up a hundred cloud instances for things that'll be happy at home with a few minutes to a week's optimisation effort. Queries that are O(N²) and would only take a few minutes to turn into O(log N), but instead people rent out a fucking off huge ass SQL cluster that not only costs gobs of money but takes a ton of time to maintain and configure, which isn't going to be done right either.
I think most people are bullshitting when they say they have impostor syndrome but when the trend in technology is to make every fucking little trivial thing a thousand times more complex than it has to be I can see how they'd feel that way. There's so bloody much you need to do that you don't need to do these days that you either can't get anything done right or the smallest thing takes an age.
I have no idea why some people put up with some of these appliances. If you bought a dish washer that made washing dishes even harder than it was before you'd return it to the store.
Every time I see the terms enterprise, fast, big data, scalable, cloud or anything of the like I bang my head on the table. One of these days I'm going to lose my fucking tits.
-
Earlier this year I had to deploy an "emergency" fix to production for (luckily) an internal facing, but customer impacting, web application.
It was only the login page they were changing. I backed up the original, copied the new file into place, and marked my task complete.
Then I went and read the details on the incident. Someone discovered that if you supply ANY valid username and leave the password blank, you're in! Put the wrong password and you're blocked, of course. But blank? You must be legit!
Curious, I looked at the timestamp on the original file I had backed up to see how long it had been like this.
4 years.
-
apparently with massively long videos, with no timestamp corruption or seek table fuckery, like ACTUALLY massively long videos, VLC dies somewhere around the "several hundred hour" mark when trying to get the next frame.
is this a bug or did my hubris exceed VLC's tolerance?
-
A couple of years ago, we decided to migrate our customers' data from one data center to another. This is the story of how it went well.
The product was a Facebook canvas and mobile game with 200M users, representing approximately 500 GiB of data to move, stored in MySQL and Redis. The source was in Dallas, and the target was New York.
Because downtime prevents users from spending their money on our "free" game, we decided to avoid it as much as possible.
In our main MySQL table (manually sharded into 100 tables), we had a modification TIMESTAMP column. We decided to use it to check whether a user needed to be copied to the new database. The rest of the data consisted of a savegame stored as gzipped JSON in a LONGBLOB column.
A program in Go was developed to continuously track whether a user's data needed to be copied again, every time progress was made on their savegame. The process went like this: first the JSON was unzipped to detect bot users with no progress, which we simply dropped; then the data was exported into a custom binary file with fast compression to reduce the file size. Next, the exported file was copied to the new servers using rsync, and a second Go program did the import on the new MySQL instances.
The 1st loop took 1 week to copy; the 2nd took 1 day; a couple of hours for the 3rd, and so on. At the end, copying the latest versions of all the savegames took roughly a couple of minutes.
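The tool was Go, but the delta-selection idea boils down to a query like this sketch (table/column names assumed; conn is any MySQL client, e.g. mysql2/promise):
// Each pass copies only rows touched since the previous pass started, so
// every iteration has less to move (1 week -> 1 day -> hours -> minutes).
async function copyDelta(conn, shard, lastPassStartedAt) {
  const [rows] = await conn.execute(
    `SELECT user_id, modified_at, savegame
       FROM users_${shard}
      WHERE modified_at >= ?`,
    [lastPassStartedAt]
  );
  return rows; // hand off to the export -> rsync -> import pipeline
}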
On the Redis side, some data was cache that we knew could be dropped without impacting the user experience. The rest were big bunches of data, so we simply SCANned each Redis instance and produced the same kind of custom binary files. The process was fast enough to run just once during the migration. It took 15 minutes because we were able to parallelise across the 22 instances.
It took 6 months of meticulous preparation. On D-day, the process went smoothly, but we shut down our service for one long hour because of a typo in a domain name.
-
1. i'm drunk.
2. please do me a sanity check
3:
this video, at this timestamp, watch the following about 5 minutes or so:
https://youtu.be/oG-6Ltp1_yE?t=1129
4. tell me (and possibly him in comment) if i'm wrong in the (point) of the following comment i wrote under that video:
20:53 ARE YOU FUCKIN KIDDING ME YOU ABSOLUTE MORON?!
yes, US has an altitude software written in fuckin VBA with an explicit statement to ignore errors, and there's not about 10x more automated testing code for a critical piece of functionality, than there is of the code that handles the actual functionality, and it's not been tested off-line (in simulated environment) as well as on-line (IRL) for at least years in all conditions, before it was deployed, YOU ABSOLUTE FUCKING MORON.
CAN YOU JUST PLEASE FOR THE LOVE OF ALL THAT'S HOLY STICK TO WHAT YOU ACTUALLY PROPERLY UNDERSTAND?!
HOLY FUCK THE LEVEL OF ARROGANCE IN YOU IN ASSUMING THAT JUST BECAUSE YOU KNOW VBA YOU KNOW HOW PROPER SOFTWARE DEVELOPMENT IS DONE, HOLY FUCKING SHIT.
I've worked in companies of 1k employees and less, on absolutely non-critical stuff, that has DevOps and QA processes and infrastructure that would make your script kiddie head spin for WEEKS, LET ALONE FUCKIN MILITARY SW DRIVING MILITARY EQUIPMENT YOU ARROGANT KNOWITALL FUCK.
Please, just please, FOCUS ON FUCKING DOING VIDEOS ABOUT STUFF YOU FUCKING UNDERSTAND, instead of stuff your ego overinflated from years of debunking dunning-krugers tells you that you're an expert in despite never actually having worked even near those fields. PLEASE. You are amazing when doing those, but this bullshit is just fucking rage-inducing. Don't ever talk about software again, because that's obviously YOUR dunning-kruger area, you fuckin bigheaded script kiddie.
-
How to NOT handle dates!
Do not store dates in "mdy" format; today would be 050217.
Working on an intranet for some multi-billion corporation, and was wondering why events from last year were showing.
Last year's events were on the 28th of October, 2016, and were showing as upcoming events :)
Checked the code, and saw this. Quick fix: turned the dates into UNIX timestamps, and it worked.
-
Ok, I know this has been said a thousand times before, but fuck localization code.
Especially when you have to determine which badly-formatted timestamp is chronologically first.
BLEARGGHHH
-
A few years ago, I had the task of integrating an insurance company's webservice into our .NET development.
The security requirements of this webservice were top notch.
As a client you had to build a request that used a negotiated certificate, canonical header structures, a security timestamp, a secret token in the header, ...
Configuring all this stuff via web.config in WCF was a pure pain in the ass.
After many phone calls and emails, I finally managed to meet all the security requirements to send a valid request.
At first, I didn't recognize my breakthrough, because my client still threw exceptions while calling the insurance webservice.
Why was that?
The exception told me, in the gentlest possible way, that .NET isn't able to process an unsecured response when there was a secured request before.
So there was top-notch security for the request, but a dumbass unsecured response carrying all the precious customer information.
*epicfacepalmnuclearexplosionfollowing*
I even had to raise the .NET version of our client, because I wasn't able to configure it to allow processing an unsecured response after a secured request.
Whyyyyyyy?!!?!!1el even!?! -
Best debugging trick ever:
Wear your fucking glasses while coding so that you do not mistake a COMMA (,) for a DOT (.).
So by
1. Doing that (which obviously aren't a huge number) and
2. Cleaning my screen (yes that).
I was able to wrap my head around the issue that almost wasted one day.
So what I intended to pass as the separator for a string-join operation was actually being passed as an argument to the underlying function (which didn't guard against it and returned a timestamp out of thin air).
Murphy's Law in production and practice.
Nice!
Depressing music continues......!
-
Client: I need you to clean up the database and remove all rows [with children] with a timestamp older than 5 years
Team: OK
Team [internally]: we definitely need a dba for this
Me: dba? Why? A junior dev can do that
Team: yyeeaahh, but still.. A DBA would do it better. You know, foreign keys and everything
Me: ....
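(and the junior-dev version, assuming the child tables declare ON DELETE CASCADE on their foreign keys; table name made up:)
// One statement: the database removes the children along with the parents.
// In practice you'd chunk this so a huge delete doesn't lock the table.
async function purgeOldRows(conn) {
  await conn.execute(
    `DELETE FROM parent_rows
      WHERE created_at < NOW() - INTERVAL 5 YEAR`
  );
}
-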
Someone's guts will be torn out tomorrow and put up on a nice clean razor barbed wire ...
I was wondering what the fucking fuck messed up my brain - till I realized that some dev mixed up the timezone on one of our servers. Dunno how the dev managed it - but the end result was not funny.
Due to the difference in time strings the newer backup had an older timestamp - and vice versa.
Which - when you want to do mass clean up and migration - is a very fucked up thing.
I had to manually check dozens of backups to make sure I got the right ones...
-.- knife goes in, gut goes out. Thx Bart Simpson.
-
How do you folks handle your pagination on the backend?
I was initially using skip and limit. But that is prone to error.
Then I thought about using the last sent item's id plus its created_at time, and fetching rows/documents after that. But that too is prone to error (more than one item can have the same timestamp), so you either get repeated data or data that gets missed.
What is your go-to strategy for pagination?
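For what it's worth, the usual fix for the same-timestamp problem is a tie-breaker: sort by (created_at, _id) and filter on the pair, so equal timestamps can't cause repeats or gaps. A sketch with the MongoDB Node driver (field names assumed):
// Keyset pagination: (created_at, _id) is a strict total order, so pages
// never repeat or skip items even when timestamps collide.
async function nextPage(collection, last, pageSize = 20) {
  const query = last
    ? {
        $or: [
          { created_at: { $lt: last.created_at } },
          { created_at: last.created_at, _id: { $lt: last._id } },
        ],
      }
    : {};
  return collection
    .find(query)
    .sort({ created_at: -1, _id: -1 })
    .limit(pageSize)
    .toArray();
}
-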
(Fyi: I was an intern.)
It took me hours to recognize that one of the necessary tables just used an Oracle DATE as a timestamp, which led to violations of the primary key constraint for interactions taking place within the same second.
Me: (explaining the problem to CW)
CW: "Yup, we know."
Me: ...
-
semi dev related (latter half)
A common and random thought I have:
A lot of units that humans use are either needlessly arbitrary or based on something weird. Like Fahrenheit. That shit is weird! 0°F is the freezing point of a water and salt solution. What a weird fucking thing to use!
But also, I like Fahrenheit more. Probably because it's what I was raised with and switching is tedious (though I'm trying. I'd like to use metric more), but also because one degree F is a smaller, more precise change. You can describe more accuracy without decimals.
On the other hand I prefer metric for length. Centimeters and millimeters are way more precise and way less confusing than inches and .... 1/8th inches? Who the fuck decided on 1/8ths?!
Which brings me to my common thought:
If you look at a Unix timestamp, you can approximate somewhat when it happened. Knowing the current timestamp and a few reference points, you can see RELATIVELY what an epoch stamp translates to. A few days ago, an hour ago, 2014-ish.
This leads me to think that if we actually taught from a young age to think in epoch as a unit (not as a replacement for normal date formats but as a secondary at first), we could just naturally read epoch time in the same manner we read dates like "28/01/2006 14:24:10 UTC"
In your brain you automatically know how old you were when that timestamp happened. What grade/job and where you lived at the time. What season it was. You know how far into the day it was, a little before lunch (or after, or whatever, your time zone will vary). Now try with 1138458250. I can usually get roughly the year, and the month if I really think about it, but that's it. And it takes much more effort.
I'm sure there are other units we could benefit from, but epoch is the one that usually brings this to mind for me.
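(and for the record, the two examples above really are the same instant:)
console.log(new Date(1138458250 * 1000).toISOString());
// -> 2006-01-28T14:24:10.000Z, i.e. 28/01/2006 14:24:10 UTC
-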
MS teams
- user activity status doesn't update properly
- your status stays ONLINE even when you've been afk for 30 minutes, or goes to AWAY after two minutes of being afk and stays that way after you've started working again
- status sometimes does not update in the active chat window when the other person's status changes
- sometimes, messages don't appear until I click into the app and force it to update the status from away to active
- I/O
- One day everything works, then suddenly the next day your mic doesn't work. Then your audio is muted altogether. Or you suddenly start hearing yourself (echo). All without any configuration changes or restarts whatsoever.
- UI
- Happens so often... You get a new message in your active chat window and you have to SCROLL DOWN MANUALLY to see it!!!
- Copied text from chat? HERE'S A TIMESTAMP AND THE NAME OF THE SENDER AS WELL!!!!!
And I'm not even mentioning the performance itself...
Srsly this app is horrendous
-
The datepicker saga
Part one
So I begin work on a page where users add their details. The project is late, and I'm taking ages on this page.
Nearly done, just need a component to let users put in some dates of birth. Look for React components.
Avoiding that one because fuck Bootstrap.
Ah-ha, that looks good, let's give it a go.
CSS doesn't exist. Oh, need to copy it over from the npm dist. Great, it applied but...
... WTF, it's tiny. Thought it was a problem with my zoom. Nope, found the issue on GitHub and it's something to do with using REM rather than EM or something. Okay, someone provided a solution, rather I saw a couple of solutions; after some hacking around I got it working, pasted it in the right location and yes, it's a reasonable size now.
Only it's a bit crap because it only allows scrolling one month at a time. No good. Hunting through the docs reveals several options to add year and month dropdowns and allow them to be scrolled. Still a bit shit as it only shows certain years; figured I'd set the start date position somewhere at the average.
Wait. The up button on the scroll doesn't even show, it's just a blank 5px button. Mouse scroll doesn't work.
Fucking...
... Bailing on that.
Part 2
Okay, sod it, I'll just make my own three dropdown select boxes: day, month and year. Easy.
At this point I take full responsibility and cannot blame any third party. And kids, take this as a lesson to plan out your code fully and make no assumptions on the simplicity of the problem.
For some reason (which I much regretted) I decided to abstract things so much I made an array of three objects, one per dropdown, each containing the information to pretty much abstract away the field it was dealing with. This sort of meta programming really screwed with my head; I have lines like the following:
[...].map(optionGroup =>
  optionGroup.options[
    parseInt(newState[optionGroup.momentId], 10)
  ]
)...
But I was in too deep and had to weave my way through this kind of abstract process like an intrepid explorer chopping through a rain forest with a butter knife.
So I am using React and Redux, and decided it was overkill to use Redux to control each field. Only trouble is, of course, when the user clicks one of the fields, it doesn't make sense in Redux to have just one of the three fields selected. And I wanted to show the field title as the first option. So I went against good practice and used local state to keep track of the fields before they are handed off to the parent/Redux. What a nightmare that was.
Possibly the most challenging part was matching my indices with moment.js to get the UI working right. It was such a meta mess, and it just shouldn't have taken so stupidly long.
But, I begin to see the light at the end of this tunnel, it's slowly coming together. And when it all clicks into place I sit back and actually quite enjoy my abysmal attempt at clean and easy to read code.
Part 3
Ran the generated timestamp through a converter and I got the day before. Oh yeah, that's great.
Seems like it's dependent on the timezone??!
Nope. Deploying. Bye. I no longer care if daylight savings makes you a day younger.
-
How do you approach generating "random" unique numbers/strings? Specifically, when you have to be sure the generated stuff stays unique over time, e.g. as few collisions in the future as possible?
Now I don't mean UUIDs, but functionality that needs data of a defined length, restricted to specific symbols, and definitely unique, every time it does its stuff.
TLDR STORY: Generating 8-digit-long numbers so they are (deterministically - wink wink) unique is hard, but Format Preserving Encryption saves the day. (for me)
FULL STORY:
I had to deal with both strings and codes today.
One was to generate a shortlink word for a URL; luckily I found a library that does exactly this (Hashids).
BUT generating an 8-digit-long, somewhat random number was harder than I thought. I found something on SO like "sha256(seed) => bytes => ascii/numbers mangling", but that had a lot of collisions because of how the hash got mangled to actually output numbers and also to fit the length.
After some hours I stumbled upon Format Preserving Encryption (pyffx) and man, it did what I wanted, with max 2 collisions in 100k values. Still, the solution feels hacky af. (encrypting a straddled unix timestamp with lots of decimals)
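For the shortlink half, the JS flavor of that library looks roughly like this (a sketch; the salt and minimum length are made up):
const Hashids = require('hashids/cjs'); // npm package: hashids
const hashids = new Hashids('some-secret-salt', 8); // min slug length 8
const slug = hashids.encode(4711);  // deterministic slug for this salt
const [id] = hashids.decode(slug);  // -> 4711
-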
i recently realised that youtube is the single most addictive app for me.
- it has reels that don't impact your usual video. reels are already a very addictive feature, but the ability to watch many 1-min videos without losing my current video's timestamp, the search feed, the history and the home feed makes it a great way to spend 1 hour on a 10-min video
- its AI is world class and recommends videos/channels full of content that i would watch
- it has a butt load of content.
- vanced/ad blockers make it possible to watch videos without ads, which makes the whole experience even more gripping.
i spend 3-4 hours on it each day and another 2-3 hours during work. when it's not open as a tab on my laptop, it's open on my mobile.
youtube feels like a very nasty, evil product as i realise all this.
do you people feel the same about youtube? any detox tips?
-
What happens on Friday, 11 April 2262 at 23:47:16.855 to the Unix timestamp? It reaches its maximum value (that's when a signed 64-bit count of nanoseconds since the epoch overflows)
-
There is one problem common to all programming languages: THE DATE! Why don't we use timestamps for everything???
-
So I had this JSON thingy, where I named the property containing a datetime string "timestamp".
For some reason, JS decided to convert that into a unix timestamp int on parse. Thx for nothing.
-
So we outsourced one of our systems (I didn't have enough time to do it myself).
I know, I know, I could stop there.
We all know the option of adding a timestamp column to a table when you need to know when a row was created. It's worth mentioning this is a table used to count money.
So we have that thing, vouchers, and of course they have an expiry date on them.
The original authors vanished (Belarus, or however it's called in English; they have an ongoing shitstorm, so I understand) and I needed to make a small adjustment.
So you'd all expect that a field 'created_at', which defaults to current_timestamp(), would be... well, the current timestamp of the record's creation, right? Riiiiight?
WRONG.
Their Hacky Solutions Inc. decided it was a great idea to make that the date of expiry, and the current timestamp on use.
Because fuck logic and clarity.
-
Linux is great
Linux can be customized
Welp, not so much. Some simple things just aren't possible.
After an hour of research I still can't get a timestamp next to each output line over SSH.
I'm not talking "history", I'm talking live.
Basically I want to see a timestamp on every single output line.
Basically this:
https://unix.stackexchange.com/ques...
But running automaticly for each command ever.
To be fair, same problem exists in PowerShell on windows
https://stackoverflow.com/questions...
And no, I don't want to switch to some random half-baked terminal.
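(for what it's worth, a tiny stdin filter gets close; pipe anything through it, e.g. "ssh host some-command | node stamp.js":)
// stamp.js -- prefix every line read from stdin with an ISO timestamp.
const readline = require('readline');
readline
  .createInterface({ input: process.stdin })
  .on('line', (line) => {
    console.log(`[${new Date().toISOString()}] ${line}`);
  });
-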
Grrrr
I love JS, but I hate browsers.
Universal ES5 way to initialize a date from an input value in "dd.mm.YYYY" format:
var split = input.value.split('.');
var from = {};
from.day = parseInt(split[0], 10);
from.month = parseInt(split[1], 10) - 1; // JS months are zero-based
from.year = parseInt(split[2], 10);
var myDate = new Date(from.year, from.month, from.day);
// if a timestamp format is needed:
var myDateTimestamp = +new Date(from.year, from.month, from.day);
No, I won't use moment.js or other bloat-braries just for fucking dates.
-
! Suggestion
So I have one project based on a fingerprint scanner, where the scanner came with an SDK for C# and other language libraries.
So basically the user punches in for login and logout, and I'm storing a timestamp for each punch in MongoDB. Now the only concern is that when the user punches, it doesn't give any response, like a sound or a light, to tell whether it was accepted or not.
I have to do something about that, so my guesses were
1. Sound
2. Light
But it's for a library and they don't want sound. And my scanner doesn't have any extra light for that.
Anyone got any suggestion or cool idea?
(I'm using Nitgen Fingkey Hamster I DX HFDU06)
-
Back again to the horror show.
We start with the integration. It's a new project, let's see how it works. First step: authentication. The documentation claims it's OAuth2. Wait.. why just 2 steps to authenticate?! Never mind, we'll contact them later. Let's go on for now.
They need a timestamp with microseconds precision. Here you are!
Nope. Come on! Take the damned timestamp! Nope. Let's take a look at theirs. If it's milliseconds precision, WHY 7 digits after the comma?!!!! We decided to contact them. And then.........their answer: we don't know of any exact number of digits to represent milliseconds.
I see...so it’s arbitrary!!! What are you going to tell us next? One hour can be 3.14159265 minutes then?!!2 -
Faced a guy who tried to pin on us that his data wasn't coming up in our app. After debugging I found out he was sending the timestamp as his local time + 'Z' (e.g. 2019-06-13T22:38:54.143Z), thinking that's how you mark EST, and blamed us for his data not showing at the correct time.
-
I need to compare the JSON results of an API before and after a code change. But it was also moved to another API.
However some fields are auto-generated, like a timestamp, or derived from the URL (resource links).
Also, if a JSON list is returned it may be in a different order...
Wondering, is there a quick way to test text likeness?
I've done it before, but I just used matching status codes and maybe measured the diff in response size.
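One approach that tends to work: normalize both payloads (drop volatile fields, sort keys and lists) before comparing. A sketch; the volatile field names and the two response variables are assumed:
// Recursively drop volatile fields and sort arrays/keys so two payloads
// that differ only in timestamps, links or order compare equal.
const VOLATILE = new Set(['timestamp', 'href', 'self']);
function normalize(value) {
  if (Array.isArray(value)) {
    return value
      .map(normalize)
      .sort((a, b) => JSON.stringify(a).localeCompare(JSON.stringify(b)));
  }
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.keys(value)
        .filter((k) => !VOLATILE.has(k))
        .sort()
        .map((k) => [k, normalize(value[k])])
    );
  }
  return value;
}
const same =
  JSON.stringify(normalize(oldResponse)) ===
  JSON.stringify(normalize(newResponse));
-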
Here's something that should be a standard rule for writing APIs:
When you offer a date filter for your API, the date format passed in should be a UNIX timestamp and not a literal date. For example,
Incorrect API URL format: '?start_date=2024-03-01&end_date=2024-04-01'
Correct API URL format: '?start_date=1709251200&end_date=1711929600'
-
"In 5 minutes" (actually 6:52)
Wastes a good 3 minutes on introduction and "start eclipse".
Just to prevent some mental acrobats on here: yes, the timestamp is at 1.04, that's because the rest wouldn't have shown the actual slide. -
Why QA should never be left "in charge" of marking priorities on tasks before "demo day" deployment and client handover of a product.
New and refactored, key, features need to be deployed by "demo day", and most developers and the PM (not me) have already been re-allocated to new clients and projects. There are several things being done in parallel to get it done.
QA: We need to be able to download CSV files showing affected users if I do extremely rare action X, and this should pop up in the system for the first 24 hours after doing X.
Priority: High
New priority for feature Y: Medium
(Action X may never be used at all)
This is implemented, reviewed and deployed.
QA: I want a timestamp in the file naming, I'm experiencing duplicate files.
Priority: High
Feature Y: Medium
Develop, review and deploy timestamping for the CSV files.
QA: They are only marked with DD/MM/YYYY, I performed rare action X several times in one day, I can still get duplicate file names marked with numbers. This is #1 priority!
Priority: High
Feature Y: Medium
...Okay, this is nitpicking, this will never happen, but fine. Overtime to do the extra minor, minor adjustment, down to hours and minutes, get it reviewed and deployed at the end of the day.
QA: I managed to do rare action X 6 times in 1 minute, I have duplicate files. It needs to be down to seconds. This is top priority.
Priority: High
New priority for feature Y: Low
.........
Constant interruptions, moronic priorities and voicecalls throughout the entire day.
Dear QA, you can be fucking donkeys at times.
-
Java 8: "the previous 2 API sets for handling datetime were cumbersome and not friendly, so we introduced a whole new set of APIs from Joda-Time"
(Looks at the new API, trying to figure out how to get the millis difference between a date and a timestamp; wants to kill everything and everyone in sight.)
If I have to Google every simple date operation, someone needs to pay
-
Ok, we were troubleshooting a network connection problem. My boss told me: use fping, a small command line utility that gives you a timestamped ping. We can then check when the connection went down. Ok. Since I've always advocated the importance of knowing advanced scripting tools, I tried to do it with PowerShell. I've been playing with Test-Connection for an hour to try to get not only the timestamp when the connection is ok, but also the timestamp when the connection is down. Don't want to go into details. I just have a question: a solution that allows you to do such an easy task in, say, 20 lines of code - is that proof that the system works or that it doesn't? To make a long story short, now I'm downloading fping.
-
The slow typer who just discovered #define
#define Property AudioUnitPropertyID
#define Parameter AudioUnitParameterID
#define ParameterValue AudioUnitParameterValue
#define Format AudioFormat
#define FormatDescription AudioStreamBasicDescription
#define PacketDescription AudioStreamPacketDescription
#define ActionFlags AudioUnitRenderActionFlags
#define TimeStamp const AudioTimeStamp
#define IO ABufferList
#define PD APacketDescription
#define Count UInt32
#define DataSize UInt32
-
Deployed a new script to production and suddenly the server time is outdated, plus old timestamp data inside the db changed to the outdated time. Who updated the data inside the db? Mindblow~
-
firstly, does anyone know of an online Telegram or WhatsApp group where I can ask silly stuff regarding web dev and get immediate answers, just like here?
secondly, I am trying to learn JS and there are just too many related terms messing with my brain:
"some features are supported ines5/es6/es15/es16/es17 , some are supported in chrome's v8/chakra/spidermonkey/android browser , some features are only supported in "serverside" and blocked in all browsers thanks to browser's vm environment; babel can't read this code, some features are provided only by node js..."
WHAT THE MESS IS THIS?
All I am trying to do is write code that makes a website visible to everyone. If I target specific browsers, I want to target Chrome and its derivatives and Android Chrome/other Android browsers.
For other browsers I'm willing to make external converters later, but I don't want to change my code by one bit. And from what I know, each browser (at least the browsers I'm thinking of supporting) has a complete JS engine already built in.
Can I or can I not build a complete functional website with those things?
and finally my main question: how to make custom exceptions in vanilla JS? I saw this answer on Stack Overflow:
===================
function InvalidArgumentException(message) {
    this.message = message;
    // Use V8's native method if available, otherwise fallback
    if ("captureStackTrace" in Error)
        Error.captureStackTrace(this, InvalidArgumentException);
    else
        this.stack = (new Error()).stack;
}
InvalidArgumentException.prototype = Object.create(Error.prototype);
InvalidArgumentException.prototype.name = "InvalidArgumentException";
InvalidArgumentException.prototype.constructor = InvalidArgumentException;
Usage:
throw new InvalidArgumentException();
var err = new InvalidArgumentException("Not yet...");
=====================================
where is the error code? what would be the exception details? what is the line number/timestamp of the error? why is that function creating an error, i thought error/exception is a class in JS?
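For what the question is reaching for, the ES6 class version of that Stack Overflow answer makes it clearer (a sketch; the code and timestamp fields are things you attach yourself, not built-ins):
// Error is a built-in constructor you can extend; the stack (with line
// numbers) comes from Error itself, everything else is your own property.
class InvalidArgumentException extends Error {
  constructor(message, code = 'E_INVALID_ARG') {
    super(message);
    this.name = 'InvalidArgumentException';
    this.code = code;            // your own error code
    this.timestamp = Date.now(); // your own timestamp
  }
}
try {
  throw new InvalidArgumentException('Not yet...');
} catch (err) {
  console.log(err.name, err.code, err.timestamp, err.message);
  console.log(err.stack);
}
-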
For me it must be the really specific things in the PHP core that are kept for historic reasons, especially this:
"easter_date — Get Unix timestamp for midnight on Easter of a given year" -
Currently working on a conversion of a tool we use to keep track of our working hours (like how much time we spent on that task, that project, etc.), because the old version of that language sucks ass and the database system sucks even more ass.
Besides the other stuff that's freaking horrible in that fucking shit tool (crashes when entering wrong input, etc.) - the genius that created that piece of crap (1997!) decided that he wanted to use a fucking timestamp as a PK column on some tables.
Why the fuck would you do that?! Jesus fuckin' christ.
And of course, the fuckin apprentice has to deal with this shit and has to be finished yesterday x)
-
Lost about 4 days debugging a bug about date conversion between frontend and backend in an API request.
This shit is mad fucking annoying.
The date format was always wrong.
So I gotta ask: is it better to always have date fields as a Long containing just a huge number that represents a timestamp (so that whenever I want to see what date it is, I have to convert it every time, on both frontend and backend, from timestamp into LocalDateTime), or is it better to keep them as Date/LocalDateTime rather than string/long, and risk fucking up the date format?
How is it done in real-world projects? What's the right way to do it and why?
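Not the only answer, but a common pattern: keep rich date types (Date, LocalDateTime) inside each codebase and cross the wire only as epoch millis, converting at the boundary. The JS side of that sketch:
// Over the wire: a plain number (epoch millis). Inside the app: real Dates.
const toWire = (date) => date.getTime();        // Date -> 1700000000000
const fromWire = (millis) => new Date(millis);  // number -> Date
const payload = JSON.stringify({ createdAt: toWire(new Date()) });
const parsed = JSON.parse(payload);
console.log(fromWire(parsed.createdAt).toISOString());
-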
Firebase is a fucking piece of dog shit.
Testing is so bad and complicated to set up; I've spent two days trying to write ONE fucking simple test with an auth middleware via Express. Why doesn't Firebase mock my dung, you pieces of shit. Even the documentation is all spread out; it's difficult and terrible to follow. I would rather build my own backend because of all the workarounds I have to make for your limited SHIT product. Even the type libraries are shit. Import Timestamp? NOPE. YOU HAVE TO IMPORT FIREBASE TO IMPORT A TIMESTAMP. Learn to define types, shitty google devs. You all suck, thanks for making shitty client SDKs.
I hope this piece of shit gets deprecated and my clients stop using it.
-
AS logcat
SQLiteException: no such column: Timestamp (code 1), while compiling: SELECT * FROM NOTEs ORDER BY Timestamp
I am trying to get the date and time on each note entry...