Search - "time calculation"
-
Love how a teacher of mine described IO wait for CPUs on a blackboard.
"That's calculation time." *draws three small lines on the blackboard* And this is IO wait. *draws a really long line, goes out of the class, out of the school, comes back* "Yes, this is IO wait. No matter how good and fast your CPU in your gaming PC is, if your hard drive is shit, everything is shit."5 -
Probably the biggest one in my life.
TL;DR at the bottom
A client wanted to create an online retirement calculator. Sounds easy enough, I said sure.
A few days later I get an email with an Excel file saying the online version has to work exactly like this, and they're on a tight deadline.
Having a little experience with Excel, I thought, eh, what could possibly go wrong? If anything I can lift the calculations straight off the Excel file.
I WAS WRONG !!!
17 sheets, all linking to each other, passing data between sheets to do the calculation.
(Sure, they had a lot of stuff to calculate, like age, gender, financial group etc etc)
First thing I said to myself was: WHAT THE FREAKING FUCK IS THIS? WHAT YEAR IS THIS?
After messing with it for a couple of hours just to get one calculation out of it, I gave up.
Thought about putting the cell data in a MySQL database and doing the calculations there, but NOOOO.
Whoever made it decided to put an Excel formula in each cell (so even if I managed to get it into a database and recode all the calculations, it would be wayyy past the deadline).
Then I had an epiphany:
"What if I could just parse the Excel file and get the data?"
Did a bit of research and, sure enough, there's a PHP project for that
(but I think it was outdated; it takes about 15-25 seconds to parse and makes a copy of the original file).
But this seemed like the best option at the time.
So I downloaded the library, finished the whole thing, wrote a cron job to delete temporary files, and added a loading spinner for that delay, so people know something is happening
(and had a few days to spare).
Sent the demo link to the client; they were very happy with it, because it worked the same as their cute little Excel file and gave the same results.
It's been live on their website for almost a year now, lots of submissions, no complaints.
I was feeling a bit guilty just after finishing it, because I could've done better, but not anymore.
Sorry for making it so long; to understand the whole thing, you need to know the full story.
TL;DR - Replicated the functionality of a 17-sheet Excel calculator in PHP, hack-ishly.
-
It's funny how doing something for ages, but technically kinda the wrong way, makes you hate that thing with a fucking passion.
In my case I am talking about documentation.
At my study, we were required to write documentation for every project, which is actually quite logical. But although I am fine with some documentation/project and architecture design, they went to the fucking limit with this shit.
Just an example of what we had to write again every time (YES, FOR EVERY MOTHERFUCKING PROJECT) and approximately how many pages it would cost (of custom content, yes we all had templates):
Phase 1 - Application design (before doing any programming at all):
- PvA (general plan for how to do the project, from who was participating to the way of reporting to your clients and so on) - pages: 7-10.
- Functional design, well, the application design in an understandable way. We were also required to design interfaces. (Yes, I am a backender, can only grasp the basics of GIMP and don't care about doing frontend) - pages: 20-30.
- Technical design (including DB schema, class diagrams and so fucking on), it mostly explains itself I think - pages: 20-40.
Phase 2 - 'Writing' the application
- Well, writing the application of course.
- Test plan (so yeah, no actual fucking cases yet, just how you fucking plan to test it, what tools you need and so on. Needed? Yes, but not as ridiculous as this) - pages: 7-10.
- Test cases: as many functions (read, every button click etc is a 'function') as you have - pages: one excel sheet, usually at least about 20 test cases.
Phase 3 - Application Implementation
- Implementation plan, describes what resources will be needed and so on (yes, I actually had to write down 'keyboard' a few times, like what the actual motherfucking fuck) - pages: 7-10.
- Acceptance test plan (the plan and the actual tests, so two files of which one is an Excel/LibreOffice Calc file) - pages: 7-10.
- Implementation evaluation, well, an evaluation. Usually about 7-10 FUCKING pages long as well (!?!?!?!)
Phase 4 - Maintaining/managing of the application
- Management/maintenance document - well, every FUCKING rule. Usually 10-20 pages.
- SLA (Service Level Agreement) - 20-30 pages.
- Content Management Plan - explains itself, same as above so 20-30 pages (yes, what the fuck).
- Archiving Document, aka, how are you going to archive shit. - pages: 10-15.
I still can't grasp why they were surprised that students lost all motivation after realizing they'd have to spend about 1-2 weeks BEFORE being allowed to write a single line of code!
The calculation (taking the worst case scenario, aka the most pages possible) comes to about 230 pages. Keep in mind that some pages will be screenshots etc. as well, but a lot are full text.
Yes, I understand that documentation is needed, but sorry, the way we had to do it is just not how you motivate students to work for their study!
Hell, students who wrote the entire project in one night, which worked perfectly with even easter eggs and so on, sometimes got bad grades BECAUSE THEIR DOCUMENTATION WASN'T GOOD ENOUGH.
For comparison, at my last internship I had to write documentation for the REST API I was writing. Three pages, providing enough for the person who had to work with it! YES, THREE PAGES FOR THE WHOLE MOTHERFUCKING PROJECT.
This is why I FUCKING HATE the word 'documentation'.
-
These fuckface wantrepreneurs, posting jobs (paying to do so) and then offering bullshit like:
- We have no funding, so you'll work for free for some time.
- Paying in fucking crypto.
- Wanting a full stack rainbow puking and shitting unicorn for peanuts
- Fucking scammers, posing as legit companies and asking you to install Anydesk.
- Asking absurd interview tasks and times (a couple of days worth of work for a task).
- Whiteboard and live coding interviews with bullshit questions thinking they're Google, while having 20 devs.
- Negotiating salaries and then, when presented with the contract, seeing the salary reduced by double the amount.
- Having idiotic shit on their company websites, like a fucking dog as a team member associated with "happiness", asshole. (One idiot even had a labrador during the video interview while cuddling him.)
- Companies asking you to install tracking software with cam recording to keep you in check. (Yeah, you can go fuck yourselves)
- Having absurd compensation schemes, like pay calculation based on the "impact" your work has
Either I'm unlucky or job hunting has become something else since I last started searching.
-
So I have a teacher for whom "C++" is basically C with a .cpp file extension and the -O0 compiler flag.
The last assignment was to implement some arbitrary lengthy calculation with a tight requirement of max 1 second runtime, to force us to basically handroll C code without using std or any form of abstraction. But because the language didn't freeze in time in 1998, there is a little keyword named "constexpr" that folded all my classes, arrays, iterators, virtual methods, std::algorithms etc. into a single return statement, thus making my code the fastest submitted.
Lesson of the story, use the language to the fullest and always turn on the damn optimizer
Ok now I'm done 😚
-
POSTMORTEM
"4096 bit ~ 96 hours is what he said.
IDK why, but when he took the challenge, he posted that it'd take 36 hours"
As @cbsa wrote, and nitwhiz wrote "but the statement was that op's i3 did it in 11 hours. So there must be a result already, which can be verified?"
I added time because I was in the middle of a port involving ArbFloat so I could get arbitrary precision. I had a crude Desmos graph doing projections on what I'd already factored in order to get an idea of how long it'd take to do larger bit lengths.
@p100sch speculated on the walked back time, and overstating the rig capabilities. Instead I spent a lot of time trying to get it 'just-so'.
Worse, because I had to resort to "Decimal" in python (and am currently experimenting with the same in Julia), both of which are immutable types, the GC was taking > 25% of the cpu time.
Performancewise, the numbers I cited in the actual thread, as of this time:
largest product factored was 32bit, 1855526741 * 2163967087, took 1116.111s in python.
Julia build used a slightly different method, & managed to factor a 27 bit number, 103147223 * 88789957 in 20.9s,
but this wasn't typical.
What surprised me was the variability. One bit length could take 100s or a couple thousand seconds even, and a product that was 1-2 bits longer could return a result in under a minute, sometimes in seconds.
This started cropping up, ironically, right after I posted the thread, whats a man to do?
So I started trying a bunch of things, some of which worked. Shameless as I am, I accepted the challenge. Things weren't perfect but it was going well enough. At that point I hadn't slept in 30~ hours so when I thought I had it I let it run and went to bed. 5 AM comes, I check the program. Still calculating, and way overshot. Fuuuuuuccc...
So here we are now, and it's safe to say the world's not gonna burn if I explain it, seeing as it doesn't work, or at least only works some of the time.
Other people, much smarter than me, mentioned it may be a means of finding more secure pairs, and maybe so, I'm not familiar enough to know.
For everyone that followed, commented, those who contributed, even the doubters who kept a sanity check on this without whom this would have been an even bigger embarrassment, and the people with their pins and tactical dots, thanks.
So here it is.
A few assumptions first.
Assuming p = the product,
a = some prime,
b = another prime,
and r = a/b (where a is smaller than b)
w = 1/sqrt(p)
(also experimented with w = 1/sqrt(p)*2 but I kept overshooting by a very small margin)
x = a/p
y = b/p
1. for every two numbers, there is a ratio (r) that you can search for among the decimals, starting at 1.0, counting down. You can use this to find the original factors e.g. p*r=n, p/n=m (assuming the product has only two factors), instead of having to do a sieve.
2. You don't need the first number you find to be the precise value of a factor (we're doing floating point math), a large subset of decimal values for the value of a or b will naturally 'fall' into the value of a (or b) + some fractional number, which is lost. Some of you will object, "But if that's wrong, your result will be wrong!" but hear me out.
3. You round for the first factor 'found', and from there, you take the result and do p/a to get b. If 'a' is actually a factor of p, then mod(b, 1) == 0, and then naturally, a*b SHOULD equal p.
If not, you throw out both numbers, rinse and repeat.
Now I knew this could be faster. Realized the finer the representation, the less important the fractional digits further right in the number were, it was just a matter of how much precision I could AFFORD to lose and still get an accurate result for r*p=a.
Fast forward, lots of experimentation, was hitting a lot of worst case time complexities, where the most significant digits had a bunch of zeroes in front of them, so starting at 1.0 was a no-go in many situations. Started looking and realized I didn't NEED the ratio of a/b, I just needed the ratio of a to p.
Intuitively it made sense, but starting at 1.0 was blowing up the calculation time, and this made it so much worse.
I realized if I could start at r=1/sqrt(p) instead, and that because of certain properties, the fractional result of this, r, would ALWAYS be 1. close to one of the factors fractional value of n/p, and 2. it looked like it was guaranteed that r=1/sqrt(p) would ALWAYS be less than at least one of the primes, putting a bound on worst case.
The final result in executable pseudo code (python lol) looks something like the above variables plus:

import math

# assumptions to make this runnable: p is a small example product of two primes,
# w starts at 1/sqrt(p) as described above, and i is a small step size
# (never pinned down above, so treat it as a tuning knob)
p = 10007 * 10009
w = 1 / math.sqrt(p)
i = 1e-12

while w >= 0.0:
    if (p / round(w * p)) % 1 == 0:
        x = round(w * p)
        y = p / round(w * p)
        if x * y == p:
            print("factors found!")
            print(x)
            print(y)
            break
    w = w + i
Still working on it, but if anyone sees obvious problems I'd LOVE to hear about it.
-
I've decided that whenever a non-technical person, be it a client or a non-technical PM, tells me it's easy or tells me it'll take only x hours, I'm going to tell them to either do it themselves, or let me do my estimate calculation. You don't fucking understand one line of what I do, yet you can magically calculate the amount of time I'll take on the task? No fucking thank you, sir.
-
Real-time physics calculation of +250000 blocks on AMD Athlon.
No worries boys, I had a fire extinguisher.
-
*tries to shrink an NTFS volume in preparation for a new BTRFS volume*
(shameless ad: check out https://github.com/maharmstone/...! BTRFS on Windows, how cool is that?)
Windows Disk Management: ah surely, I can do that for you.
*clicks "shrink"*
…
Well that disk calculation process is taking a long time...
*checks Task Manager*
*notices a pretty disk-intensive defrag process*
… Yeah.. defragging. Seems reasonable. Guess I'll just let it finish its defragmentation process. After that it should just be able to shrink the NTFS filesystem and modify the partition table without any issues. After all, I've done this manually in Linux before, and after defragging (to relocate the files on the leftmost sectors of the disk) it finished in no time.
*defrag finishes*
Alright, time to shrink!
….
Taking a shitton of time...
*checks Task Manager again*
System taking a lot of disk this time.. not even a defrag? How long can this shit take at 40MB/s simultaneous read and write?
…
*many minutes passed, finished that episode of Elfen Lied, still ongoing...*
Fucking piece of Microshit. Are you really copying over the entire 1.3TB that that disk is storing?! Inefficient piece of crap.. living up to the premise of Shitware indeed!!!
-
Got my first serious project about a year ago. Made it clear to the client that we were developing a Windows app. After around 80-100 hours of work the client just goes "how about we make this a web app?" Got "financial support" instead of the agreed payment, around 4 times less money than agreed upon. They never ended up using some parts of the software (I ran the server, so I knew they weren't using it).
I once had a nightmare explaining to a client that he cannot use a 30+ MB image as his home page background. Average internet speed in my country is around 1-2 MB/s. I even had to do the calculation for him because he couldn't figure out the time it would take a visitor to load the image.
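The back-of-the-envelope math I had to spell out for him looks roughly like this (a sketch; the 30 MB image and the 1-2 MB/s speeds are the figures from above, nothing else is measured):

image_size_mb = 30          # the client's beloved background image
for speed_mb_s in (1, 2):   # typical speeds around here
    seconds = image_size_mb / speed_mb_s
    print(f"at {speed_mb_s} MB/s the background alone takes ~{seconds:.0f} s to load")
-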
So ok here it is, as asked in the comments.
Setting: customer (huge electronics chain) wants a huge migration from custom software to SAP ERP, hybris Commerce for B2B and ... Azure cloud.
Timeframe: ~10 months….
My colleague and I had the glorious task to make the evaluation result of the B2B approval process (like you can only buy up to €1000, then someone has to approve) available in the cart view, not just at the end of the checkout. Well I thought, easy, we have the results, just put them in the cart … hmm :-\
The whole thing is that the storefront - called accelerator (although it should rather be called decelerator) - is a 10-year-old (looking) buggy interface that promises customers it solves all their problems and just needs some minor customization. Fact is, it's an abomination, which makes us spend 2 months in every project to „rip it apart" and fix/repair/rebuild major functionality (which changes every 6 months because of „updates").
After a week of reading the scarce (aka non-existing) docs and decompiling and debugging hybris code, we found out (besides dozens of bugs) that this was not going to be easy. The domain model is fucked up - both CartModel and OrderModel extend AbstractOrderModel. Though we only need functionality that is in the AbstractOrderModel, the hybris guys decided (for an unknown reason) to use OrderModel in every single fucking method (about 30 nested calls ….). So what shall we do, we don't have an order yet, only a cart. Fuck, let's fake an order, push it through, use the results and dismiss the order … good idea!? BAD IDEA (don't ask …). So after a week or two we changed our strategy: create duplicate interfaces for nearly all (spring) services with changed method signatures that override the hybris beans and allow the use of CartModels (which is possible, because within the super methods, they actually „cast" it to AbstractOrderModel *facepalm*).
After about 2 months (2 people full time) we have a working „prototype". It works with the default sample accelerator data. Unfortunately the customer wanted to have its own dataset in the system (what a shock). Well you guess it … everything collapsed. The way the customer wanted to "have it working" was just incompatible with the way hybris wants it (yeah yeah SAP, hybris is sooo customizable …). Well we basically had to rewrite everything again.
Just in case you're wondering … the requirements were clear in the beginning (stick to the standard! [configuration/functionality]). Well, then the customer found out that this is shit … and well …
So some months later, next big thing. I was appointed technical sublead (is that a word?)/sub-PM for the topics ‚delivery service' (cart, delivery time calculation, u name it) and customer registration - a reward for my great work with the b2b approval process???
Customer's office: 20+ people, mostly SAP related, a few C# guys, and drumroll .... the main (external) overall superhero ‚I'm the greatest and ur shit' architect.
Average age 45+, me - the ‚hybris guy' (he really just called me that all the time), age 32.
He powerpoints his „tables" and other weird out-of-this-world stuff on the wall, talks and talks. Everyone is in awe (or fear?). Everything he says is just bullshit and I see it in the eyes of the others. Finally the hybris guy interrupts him as he explains the overall architecture (which is just wrong) and points out how it should be (according to my docs, which were more up to date). From then on he didn't just "not like" me anymore. (good first day)
I remember the looks of the other guys - they were relieved that someone pointed that out - saved them weeks of useless work ...
Instead of talking the customer's tongue he just spoke SAP gibberish … arg (common in SAP land, as I had to learn the hard way).
Outcome of about 5 (useless) meetings later: we are going to blow out data from Informatica to SAP to Azure to Datahub to hybris ... hmpf, needless to say it's fucking super slow.
But who cares, I'll get my own REST endpoint that'll do all I need.
First try: error 500. 2nd try: 20 seconds later, error message in HTML, content type JSON. A few days later the C# guy manages to deliver a kinda working, still slow service, only the results are wrong. Customer blames the hybris team, hmm, we're just using their fucking results ...
(The SAP guys (customer service) just don't seem to be able to activate/configure the OOTB OData service, so I was told.)
Several email rounds and meetings later, about 2 months in, still no working hybris integration (all my emails with detailed checklists for every participant and deadlines were unanswered/ignored or answered with unrelated stuff). Customer pissed at us (god knows why, I tried, I really did!). So I decide to fly up there to handle it all by myself.
-
Years ago, we were setting up an architecture where we fetch certain data as-is and throw it in CosmosDb. Then we run a daily background job to aggregate and store it as structured data.
The problem is the volume. The calculation step is so intense that it will bring down the host machine, and the insert step will bring down the database in a manner where it takes 30 min or more to become accessible again.
Accommodating for this would need a fundamental change in our setup. Maybe rewriting the queries, data structure, containerizing it for auto scaling, whatever. Back then, this wasn't on the table due to time constraints and, nobody wanted to be the person to open that Pandora's box of turning things upside down when it "basically works".
So the hotfix was to do a 1 second threadsleep for every iteration where needed. It makes the job take upwards of 12 hours where - if the system could endure it - it would normally take a couple minutes.
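Roughly, the hotfix boils down to something like this (a sketch only; the real job is not Python and these names are made up, but the shape is the same):

import time

def run_daily_aggregation(transactions, aggregate, insert):
    # the "fix": throttle every iteration so the host and the database
    # survive, turning a couple-minute job into a ~12-hour one
    for tx in transactions:
        result = aggregate(tx)   # the CPU-heavy calculation step
        insert(result)           # the write that used to bring down the DB
        time.sleep(1)            # the infamous 1-second sleep per iteration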
The solution has grown around this behavior ever since, making it even harder to properly fix now. Whenever there is a new team member there is this ritual of explaining this behavior to them, then discussing solutions until they realize how big of a change it would be, and concluding that it needs to be done, but...
not right now.
-
I hate the feeling you get when you do a lengthy, drooling task that once finished got you nowhere.
My day was mostly productive for a Sunday: woke up late as all Sundays, spent the afternoon writing a proposal and exercising, when I saw a notification for a homework due tonight at 12.
A research paper about Dijkstra's dining philosophers problem, 8 pages minimum. To be honest I'd seen the problem a long time ago while studying C++ and I had the theory down, and that is my issue: it becomes inherently boring and useless in my head. It's in these situations that my mind gets lazy.
I wrote the first 3 pages in half an hour but I was done, I started revising the proposal and fixed a calculation error, checked Rust's take on the philosophers issue and decided to save it for winter break along with learning Rust (although got some basics down), made rough budget approximations for the next 3 months, lost myself a little bit on deep house music (notable tracks tadow from masego, nevermind - Dennis Lloyd and gold - Chet faker), etc...all in all it took me 3 hours more to finish the assignment, including breaks and dinner.
I am working on a lot of stuff lately and my main project's sprint ends this Tuesday, and it pisses me off: after all that, I learnt nothing new, got nowhere with my project, and will probably get an 80 because Google Docs has no margin setting. Worse than being lazy for fun is inevitably being lazy because you're compelled to do tasks that are low priority by your head's standards.
-
At the institute where I did my PhD, everyone had to take on some role apart from research to keep the infrastructure running. My part was admin for the Linux workstations and supporting the admin of the calculation cluster we had (about 11 machines with 8 cores each... hot shit at the time).
At some point the university had some euros of budget left that had to be spent so the institute decided to buy a shiny new NAS system for the cluster.
I wasn't really involved with the stuff, I was just the replacement admin so everything was handled by the main admin.
A few months on and the cluster starts behaving ... weird. Huge CPU loads, lots of network traffic. No one really knows what's going on. At some point I discover a process on one of the compute nodes that apparently receives commands from an IRC server in the UK... OK code red, we've been hacked.
First thing we needed to find out was how they had broken in, so we looked at the logs of the compute nodes. There was nothing obvious, but the fact that each compute node had its own public IP address and was reachable from all over the world certainly didn't help.
After a few hours of poking around, not really knowing what I was looking for, I resorted to a tcpdump to find whether there was any actor on the network that I might have overlooked. And indeed I found an IP address that I couldn't match with any of the machines.
Long story short: It was the new NAS box. Our main admin didn't care about the new box, because it was set up by an external company. The guy from the external company didn't care, because he thought he was working on a compute cluster that is sealed off behind some uber-restrictive firewall.
So our shiny new NAS system, filled to the brim with confidential research data (and, as it turns out, a lot of login credentials), was sitting there with its quaint little default config and a DHCP-assigned public IP address, waiting for the next best rookie hacker to try U:admin/P:admin and take it over.
Looking back, this could have gotten a lot worse, and we were extremely lucky that these guys either didn't know what they had there or didn't care.
-
Background story:
One of the projects I develop generates advice based on energy usage and a questionnaire with 300 questions.
Over 400 different variables determine what kind of advice is given. Lots of user input and over a thousand text blocks that need to show or not.
Rant:
WTF do you want me to do when you tell me "It's not giving the right advice for the lights."
Why, for the love of.. do I need to ask you every time? If something is not working, tell me what and for which user. Don't tell me "calculation whatever" is not working, I don't know that calculation. Your calculations are maintainable in your CMS.
And how, like I really wonder, do you expect me to find and fix it when you don't tell me which user is having this problem? You just want me to randomly guess one of the thousands of users that should be given that specific advice?
FCK, like 80% of my time solving problems is spent trying to figure out wtf you're talking about.
And then, what a miracle, the function is doing exactly what it is doing, but you forgot a variable. It's not like the code I write suddenly decides it does not feel like giving the right answer.
-
TL;DR: I was editing the wrong file, let's go to bed.
We have this huge system that receives data from an API endpoint, does a whole bunch of stuff, going through three other servers, and then via some calculation based on the data received from the UI, and data received from the endpoint, it finally sends the calculated fields to the UI via websocket.
Poor me sitting for over 4 hours debugging and changing values in the logic file trying to understand why one of the fields ends up being null.
Of course every change needs a reboot of all 4 servers involved, and a hard refresh of the UI.
I even tried to search for the word null in that file, but to no avail.
After scattering hundreds of console logs, and pulling my hair out, I found out that I was editing the wrong file.
I guess it's time for some sleep.
-
So recently I had an argument with gamers on memory required in a graphics card. The guy suggested 8GB model of.. idk I forgot the model of GPU already, some Nvidia crap.
I argued on that, well why does memory size matter so much? I know that it takes bandwidth to generate and store a frame, and I know how much size and bandwidth that is. It's a fairly simple calculation - you take your horizontal and vertical resolution (e.g. 2560x1080 which I'll go with for the rest of the rant) times the amount of subpixels (so red, green and blue) times the amount of bit depth (i.e. the amount of values you can set the subpixel/color brightness to, usually 8 bits i.e. 0-255).
The calculation would thus look like this.
2560*1080*3*8 = the resulting size in bits. You can omit the last 8 to get the size in bytes, but only for an 8-bit display.
The resulting number, once converted to bytes, comes to exactly 8100 KiB, or roughly 8MB, to store a frame.
Assuming that the refresh rate for the display is 60Hz, and that you didn't overbuild your graphics card to display a bazillion lost frames for that, you need to display 60 frames a second at 8MB each. Now that is significant. You need 8x60MB/s for that, which is 480MB/s. For higher framerate (that's hopefully coupled with a display capable of driving that) you need higher bandwidth, and for higher resolution and/or higher bit depth, you'd need more memory to fit your frame. But it's not a lot, certainly not 8GB of video memory.
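Putting the same arithmetic in a few lines (a hedged sketch; it assumes 8-bit RGB with no alpha and no compression, exactly the numbers used above):

width, height = 2560, 1080
subpixels = 3       # red, green, blue
bit_depth = 8       # bits per subpixel
refresh_hz = 60

frame_bytes = width * height * subpixels * bit_depth // 8
print(f"one frame: {frame_bytes / 1024:.0f} KiB")                           # 8100 KiB
print(f"at {refresh_hz} Hz: {frame_bytes * refresh_hz / 2**20:.0f} MiB/s")  # ~475 MiB/s, i.e. roughly the 480MB/s above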
Question time for gamers: suppose you run your fancy game on an iGPU in a laptop or whatever, a system with 8GB of memory that you're resorting to running off the filthy iGPU. Are you actually using all that shared general-purpose RAM for frames and "there's more to it" juicy game data? Where does the rest of the operating system's memory fit in such a case? Ahhh.. yeah it doesn't. The iGPU magically doesn't use all that 8GB of memory you've just told me the dGPU totally needs.
I compared it to displaying regular frames, yes. After all that's what a game mostly is, a lot of potentially rapidly changing frames. I took the entire bandwidth and size of any unique frame into account, whereas the display of regular system tasks *could* potentially get away with less, since most of the frame is unchanging most of the time. I did not make that assumption. And rapidly changing frames is also why the bitrate on e.g. screen recordings matters so much. Lower bitrate means that you will be compromising quality in rapidly changing scenes. I've been bit by that before. For those cases it's better to have a huge source file recorded at a bitrate that allows for all these rapidly changing frames, then reduce the final size in post-processing.
I've even proven that driving a 2560x1080 display doesn't take oodles of memory because I actually set the timings for such a display in order for a Raspberry Pi to be able to drive it at that resolution. Conveniently the memory split for the overall system and the GPU respectively is also tunable, and the total shared memory is a relatively meager 1GB. I used to set it at 256MB because just like the aforementioned gamers, I thought that a display would require that much memory. After running into issues that were driver-related (seems like the VideoCore driver in Raspbian buster is kinda fuckulated atm, while it works fine in stretch) I ended up tweaking that a bit, to see what ended up working. 64MB memory to drive a 2560x1080 display? You got it! Because a single frame is only 8MB in size, and 64MB of video memory can easily fit that and a few spares just in case.
I must've sucked all that data out of my ass though, I've only seen people build GPU's out of discrete components and went down to the realms of manually setting display timings.
Interesting build log / documentary style video on building a GPU on your own: https://youtube.com/watch/...
Have fun!
-
TIL how time-delay relays are used in circuits with a hell of a lot of power. For example, to power and control the motor of a machine that lifts 20 people up into the sky and back to the ground again.
FYI this is a simple circuit. We will hopefully start experimenting with SPS/Speicherprogrammierbare Steuerung (Programmable Logic Controllers). I cannot wait to make use of our predesigned circuits with complex logic gates with the new unit we will get to know.
Unfortunately, we will do practical networking (testing cables for signal strength and speed, building a network of telephones and calling each other - guess this is going to be the funniest part lol - etc.) before the SPS and LOGO software phase.
Btw, I am doing an all-nighter rn and repeating everything we did in our recent signal calculation lesson. We have another exam tomorrow.
-
My department VP announced to our sales team that our company would have an online web-based catalog capable of custom style configuration, pricing calculation and quote generation. The catalog would manage all of our customers' different discount rates automatically and notify the correct sales rep any time a customer completes a quote. This was announced on September 30(ish) with a launch date of December 1. I'm the only web developer, and I found out after the announcement. :/
-
Worst architecture I've seen?
The worst (working here) follows the academic pattern of trying to be perfect, when the only measure of 'perfect' should be the user saying "Thank you", or an architecture no one knows about (the 'it just works' architectural pattern).
A senior developer with a master's degree in software engineering developed a class/object architecture for representing an Invoice in our system. Took almost 3 months to come up with ...
- Contained over 50 interfaces (IInvoice, IOrder, IProduct, etc. mostly just data bags)
- Abstract classes that implemented the interfaces
- Concrete classes that injected behavior via the abstract classes (constructors, Copy methods, converter functions, etc)
- Various data access (SQL server/WCF services) factories
During code reviews I kept saying this design was too complex and too brittle for the changes everyone knew were coming. The web team that would ultimately be using the framework had, at best, vague requirements. Because he had a master's degree, he knew best.
He was proud of his nearly perfect academic design (almost 100% test code coverage, very nice class diagrams, lines and boxes, auto-generated documentation, etc.), until the DBAs changed table relationships (1:1 turned into 1:M and M:M), field names, etc., and users changed business requirements (e.g. the concept of an invoice fee changed the total-amount-due calculation, which broke nearly everything).
That change caused a ripple affect that resulted in a major delay in the web site feature release.
By the time the developer fixed all the issues, the web team had written their own framework that hit the database directly (Dapper + simple DTOs), and his library was never used.
-
Yep....nothing wrong here.
Seems like a good time calculation.
Just Origin being Origin.
And that's when you know it's time to go back to Steam :)
-
I had the idea that part of the problem of NN and ML research is that we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement weren't just from new architectures, but from entirely new approaches to how data is transformed and how we calculate loss, for example KL divergence.
And it occurred to me that all we really need is training/test/validation data, and with the right approach we can let the system discover not only the architecture (been done before), but also the nonlinear and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code, as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here, or a maxpool there, but derive implementations of these all on their own.
More importantly with a little pruning, we could even use successful examples for bootstrapping smaller more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't, or do training, testing, and validation in batches, to anneal a network in the correct direction.
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision), and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects, sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point-mutations typically cause disease.
It might seem like this wouldn't work out of the gate, just on the basis of intuition, but I think the benefit of working through node substitutions or entire subgraph substitution is that we can check test/validation loss before training is even complete.
If we train a network to specify a known loss, we can even have that evaluate the networks themselves, and run variations on our network loss node to find better losses during training time, and at some point let nodes refer to these same loss calculation graphs, within themselves, switching between them dynamically..via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await()-style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
I've written all the classes and started on the interpreter itself, just a few things that need fleshed out now.
Heres my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try MNIST, or teach it step by step how to do sequence masking and prediction, dunno yet.
-
So I recently finished a rewrite of a website that processes donations for nonprofits. Once it was complete, I would migrate all the data from the old system to the new system. This involved iterating through every transaction in the database and making a cURL request to the new system's API. A rough calculation yielded 16 hours of migration time.
The first hour or two of the migration (where it was creating users) was fine, no issues. But once it got to the transaction part, the API server would start using more and more RAM. Eventually (30 minutes in), it would start OOMing and such. For a while, I just assumed the issue was a lack of RAM, so I upgraded the server to 16 GB of RAM.
Running the script again, it would approach the 7 GiB mark and max out all 8 CPUs. At this point, I assumed there was a memory leak somewhere and the garbage collector was doing its best to free up anything it could find. I scanned my code time and time again, but there was no place I was storing any strong references to anything!
At this point, I just sort of gave up. Every 30 minutes, I would restart the server to fix the RAM and CPU issue. And all was fine. But then there was this one time where I tried to kill it, but I go the error: "fork failed: resource temporarily unavailable". Up until this point, I believed this was simply a lack of memory...but none of my SWAP was in use! And I had 4 GiB of cached stuff!
Now this made me really confused. So I did one search on the Internet, and apparently this can be caused by many things: a lack of file descriptors or even too many threads. So I did some digging, and apparently my app was using over 31 thousand threads!!!!! WTF!
I did some more digging, and as it turns out, I never called close() on my network objects, thus leaving ~30 new "worker" threads per iteration of the migration script. Thanks Java, if only finalize() was utilized properly.
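The moral, sketched in Python rather than the Java it actually was (socket here is just an illustrative stand-in for whatever network object never got closed):

import socket

def query_api_leaky(host, port, payload):
    s = socket.create_connection((host, port))
    s.sendall(payload)
    return s.recv(4096)        # never closed: resources pile up every iteration

def query_api_fixed(host, port, payload):
    # the context manager guarantees close() even on exceptions,
    # instead of hoping the GC or finalize() gets around to it
    with socket.create_connection((host, port)) as s:
        s.sendall(payload)
        return s.recv(4096)
-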
I'm covering for a colleague who has 2 weeks of vacation. Everything is made with Drupal 7, and it's a backend + frontend chimera with no head and 50 anuses.
So, last Monday I get told I have to show a value based on the formula:
value * (rate1 - rate2) / 2
On Thursday, every calculation on that page is suddenly wrong and I get blamed for it. Turns out, now it has to be:
value * (rate1 - rate2) / rate1 / 2
Today, I get told again the calculations are wrong. "It has to be wrong, the amount changes when rate1 changes!". There'll be a meeting later today to discuss such behaviour.
All these communications happened via e-mail, so I'm quite sure it's not my fault... But, SERIOUSLY! Do they think programmers' time is worthless? Now I'll have to waste at least 1 hour in a useless meeting because they cba to THINK before giving out specs?!
Goddammit. Nice Monday.
-
Fucking hell my insights are late ones...
So I am working with fluid dynamics simulations. I went home, fired up the laptop and started the calculations. This is how the events went:
9 PM: starting the calculation
10 PM: checking on the graphs to see whether everything will be alright if I leave it running. Then went to sleep.
2 AM: Waking up in shock because I forgot to turn on autosave after every time step. Then reassured myself that this is only a test and I won't need the previous results anyway.
5 AM: Waking up, everything seems to be fine. I pause the calculation, hibernate the laptop and go to work.
6:40 AM: On my way to the front door a stray thought strikes my mind... What if it lost contact with the licence server while entering the hibernated state? Bah, never mind... It will establish a new connection when I switch it back on.
6:45 AM: Switching on the laptop. Two error messages greet me:
1. Lost contact with license server.
2. Abnormal exit.
Looking at the tray, the paused simulation is gone. Since I didn't enable autosave, I have to start it all over again. Well. Lesson learned, I guess. Too bad it cost 8 hours of CPU time.
-
In an e-commerce project, my client asked for tax calculation with some static x value all the time.
Me: As promised your site is live and please check it.
Client: Checks everything. I want this tax to be dynamic.
Me: That was not mentioned earlier. Now I need to redo the design which takes much time.
Client: You will just change the addition at the end, so what's with the design change?
Me: (I already killed him in my mind)
True story 🍻
-
What would you say to an employer offering a job at a starting point of only a percentage of the top salary you'll earn once the project is finished? Let's say they offered to pay you $40k to work full time building a highly complex website with multiple API integrations and sophisticated estimate calculation workflows, plus you'd be doing marketing and copywriting, with a goal of achieving 300 signups a day. In the middle of the project they boost the pay to, say, $60k salary rate. At the 12 month mark, which is the final "launch" date anticipated (and onward), you get $100k/year and you're the only person paid to do All The Things.
Oh, and also, the previous person they hired to do it failed to deliver and was let go.
Would you turn that down because earning so little at first, on the speculation that the venture will be a success and that you'll eventually get to the $100k level (plus their failure with the previous person), is too much of a gamble? Asking for a friend. ;)
-
Welp, how much longer till someone builds some magic that cracks any modern encryption in the blink of an eye?
...
tl;dr Google claimed it has managed to cut calculation time to 200 seconds from what it says would take a traditional computer 10,000 years to complete.
https://nydailynews.com/news/...
-
Lately I am facing this issue. I spent a lot of time and did hard work on some specific thing, but it doesn't seem to work as it should, so because of this I am disappointed. I don't know if I should be feeling like this or not, but I am questioning whether I am a good developer or not!
It's not that I don't know stuff; I started working with Laravel a few months ago. It's been 4 months and I already developed the backend of a whole app, but it was not that complex. Recently, I was assigned to this new project, which is very complex, and it was already made by some other developer, so I am new to it and don't know how it actually works. But I was assigned to add new functionality to it, and it was kinda complex, like maths and calculations, with the calculations having to update depending on the incoming data. So I worked hard and did overtime for this, and I think I did a good job, but it turns out it didn't work. So I worked on it again, but again it turned out it didn't work as it was supposed to.
So, after all of my hard work, the code was not right, and that led me to question myself, and I am feeling bad. Is this normal? Is it okay to feel this way about something?