Search - "more layers"
New for avatars - emotions! You can now change your facial expression on your avatar to better capture your dev mood! Getting expressions working right turned out to be quite the undertaking due to the ripple effect of the various layers that each expression touched so our total layers just for men ballooned out from 300 layers to 1100. And @dfox re-architecting how layers work to handle the interconnectedness of expression meant tying together facial expression, skin tone, facial hair, and hair color to make sure everything stays in sync. It’s a fun new addition, I hope everyone enjoys!
I also want to apologize for the delay in getting this out, I meant to have this done ages ago but I got thrown a curveball at work and was laid off back in April and have been super stressed running around trying to find a new job for the past 3 months. I figured I’d have more free time to work on devRant, but hunting for work is so exhausting, it’s really taken its toll emotionally and financially (no unemployment benefits because according to my state even though we lose money every month “you’re still a corporate officer”). Things are finally looking promising on the job search front, and I expect once things get back to normal @dfox and I can get our release velocity back up, but until then, please bear with me.
P.S. If you have the resources, we certainly do appreciate your support with devRant++. Your monthly contributions really do make a difference! Thanks all!44 -
29-year veteran here. Began programming professionally in 1990, writing BASIC applications for an 8-bit Apple II+ computer. Learned Pascal, C, Clipper, COBOL. Ironic side-story: back then, my university colleagues and I used to make fun of old COBOL programmers. Fortunately, I never had to actually work with the language, but the knowledge allowed me to qualify for a decent job position, back in '92.
For a while, I worked with an IBM mainframe, using REXX and EXEC2 scripting languages for the VM/SP operating system. Then I began programming for the web, wrote my first dynamic web applications with cgi-bin shell and Perl scripts. Used the little-known IBM Net.Data scripting language. I finally learned PHP and settled with it for many, many years.
I always wanted to be a programmer. As a kid I dreamed of being like Kevin Flynn, of TRON - create world famous videogames and live upstairs from my own arcade! Later on, at some point, I was disappointed, I questioned my skills, I thought I should do more, I let other people's expectations make me feel bad. Then I finally realized I actually enjoy a quieter, simpler life. And I made peace with it.
I'm now like the old programmers I used to mock 30 years ago. There's so much shit inside my brain. And everything seems so damn complex these days. Frameworks, package managers, transpilers, layers and more layers of code. I try to keep up. And the more I learn, the more it seems I don't know.
Sometimes I feel tired. Yet, I still enjoy creating things and solving problems with programming. I still have fun learning. And after all these years, I learned to be proud of my work, even if it didn't turn out to be as glamorous as in the movies.30 -
Worst dev team failure I've experienced?
One of several.
Around 2012, a team of devs were tasked to convert an ASPX service to WCF that had one responsibility, returning product data (description, price, availability, etc...simple stuff)
No complex searching, just pass the ID, you get the response.
I was the original developer of the ASPX service, whose API took an XML request and returned an XML response. The 'powers-that-be' decided anything XML was evil and had to be purged from the planet. If this thought bubble popped up over your head "Wait a sec...doesn't WCF transmit everything via SOAP, which is XML?", yes, but in their minds SOAP wasn't XML. That's not the worst WTF of this story.
The team, 3 developers, 2 DBAs, network administrators, several web developers, worked on the conversion for about 9 months using the Waterfall method (3~5 months was mostly in meetings and very basic prototyping) and using a test-first approach (their own flavor of TDD). The 'go live' day was to occur at 3:00AM and it was mandatory that nearly the entire department be on-site (including the department VP) and available to help troubleshoot any system issues.
3:00AM - Teams start their deployments
3:05AM - Thousands and thousands of errors from all kinds of sources (web exceptions, database exceptions, server exceptions, etc), site goes down, teams roll everything back.
3:30AM - The primary developer remembered he made a last minute change to a stored procedure parameter that hadn't been pushed to production, which caused a side-effect across several layers of their stack.
4:00AM - The developer found his bug, but the manager decided it would be better if everyone went home and get a fresh look at the problem at 8:00AM (yes, he expected everyone to be back in the office at 8:00AM).
About a month later, the team scheduled another 3:00AM deployment (VP was present again), confident that introducing mocking into their testing pipeline would fix any database related errors.
3:00AM - Team starts their deployments.
3:30AM - No major errors, things seem to be going well. High fives, cheers..manager tells everyone to head home.
3:35AM - Site crashes, like white page, no response from the servers kind of crash. Resetting IIS on the servers works, but only for around 10 minutes or so.
4:00AM - Team rolls back, manager is clearly pissed at this point, "Nobody is going fucking home until we figure this out!!"
6:00AM - Diagnostics found the WCF client was causing the server to run out of resources, with a mix of clogging up server bandwidth, and a sprinkle of N+1 scaling problem. Manager lets everyone go home, but be back in the office at 8:00AM to develop a plan so this *never* happens again.
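(Side note for anyone who hasn't been bitten by it: the N+1 problem is one query for the list plus one more query per item, so round trips grow with the result size. A rough sketch of the difference in Python against sqlite; the table and column names are invented, and this obviously isn't the actual WCF code.)

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, description TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)",
                 [(i, f"product {i}", 9.99) for i in range(1, 101)])

def get_products_n_plus_one(ids):
    # one round trip per id: 100 ids = 100 queries, and it only gets worse under load
    return [conn.execute("SELECT id, description, price FROM products WHERE id = ?",
                         (pid,)).fetchone() for pid in ids]

def get_products_batched(ids):
    # single round trip: let the database do the set logic
    marks = ",".join("?" * len(ids))
    return conn.execute(f"SELECT id, description, price FROM products WHERE id IN ({marks})",
                        list(ids)).fetchall()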
About 2 months later, a 'real' development+integration environment (previously, any+all integration tests were on the developer's machine) and the team scheduled a 6:00AM deployment, but at a much, much smaller scale with just the 3 development team members.
Why? Because the manager 'froze' changes to the ASPX service, the web team still needed various enhancements, so they bypassed the service (not using the ASPX service at all) and wrote their own SQL scripts that hit the database directly and utilized AppFabric/Velocity caching to allow the site to scale. There were only a couple of client applications using the ASPX service that needed to be converted, so deploying at 6:00AM gave everyone a couple of hours before users got into the office. Service deployed, worked like a champ.
A week later the VP schedules a celebration for the successful migration to WCF. Pizza, cake, the works. The 3 team members received awards (and a envelope, which probably equaled some $$$) and the entire team received a custom Benchmade pocket knife to remember this project's success. Myself and several others just stared at each other, not knowing what to say.
Later, my manager pulls several of us into a conference room
Me: "What the hell? This is one of the biggest failures I've been a part of. We got rewarded for thousands and thousands of dollars of wasted time."
<others expressed the same, expletive-laden sentiments>
Mgr: "I know..I know...but that's the story we have to stick with. If the company realizes what a fucking mess this is, we could all be fired."
Me: "What?!! All of us?!"
Mgr: "Well, shit rolls downhill. Dept-Mgr-John is ready to fire anyone he felt could make him look bad, which is why I pulled you guys in here. The other sheep out there will go along with anything he says and more than happy to throw you under the bus. Keep your head down until this blows over. Say nothing."11 -
Listen. Use invariants. If you can do
if (!x) {
    return foo;
}
...rest of function logic...
Instead of
if (x) {
    ...some long branch with more tests...
} else {
    return foo;
}
Please do. It's so much easier to read when all of your conditions are tested in a line at the top instead of nested 8 layers deep in if-else blocks.
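If a concrete example helps, here's the same idea as a tiny, self-contained Python sketch (the order fields are invented):

def ship_order(order):
    # invariants first: every early return is one less level of nesting below
    if order is None:
        return "no order"
    if not order.get("items"):
        return "empty order"
    if order.get("status") != "paid":
        return "not paid yet"

    # the happy path stays flat instead of sitting three if/else blocks deep
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return f"shipped, charged {total:.2f}"

print(ship_order({"status": "paid", "items": [{"price": 9.99, "qty": 2}]}))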
Thanks12 -
C'mon people! Spread the word! "The cloud" is not "just someone else's computer", it's a completely different way to compute!
I'm so tired of the oversimplifications done trying to explain the concept. The massive amount of work, sweat and tears put into the orchestration, automation and abstraction layers to deliver truly elastic, scalable and self healing infrastructure, applications and services deserves a fuckload more respect than "just someone else's computer"!
Hosting and time-sharing have been with us almost as long as we have had computers (mainframes etc), but dismissing the effort of thousands upon thousands of devs and ops people to make systems robust and automated enough that you can literally throw a wrench in the engine at any time during production and not have the systems suffer is fucking insane!
The whole reason the term "cloud" is so fitting is not just because it was coined from the cloud-shape used in technical and non-technical drawings and illustrations symbolising the internet, but also because of the illusion of magic it gives the end-user not being able to see "whats inside the music box".19 -
3 rants for the price of 1, isn't that a great deal!
1. HP, you braindead fucking morons!!!
So recently I disassembled this HP laptop of mine to unfuck it at the hardware level. Some issues with the hinge that I had to solve. So I had to disassemble not only the bottom of the laptop but also the display panel itself. Turns out that HP - being the certified enganeers they are - made the following fuckups, with probably many more that I didn't even notice yet.
- They used fucking glue to ensure that the bottom of the display frame stays connected to the panel. Cheap solution to what should've been "MAKE A FUCKING DECENT FRAME?!" but a royal pain in the ass to disassemble. Luckily I was careful and didn't damage the panel, but the chance of that happening was most certainly nonzero.
- They connected the ribbon cables for the keyboard in such a way that you have to reach all the way into the spacing between the keyboard and the motherboard to connect the bloody things. And some extra spacing on the ribbon cables to enable servicing with some room for actually connecting the bloody things easily.. as Carlos Mantos would say it - M-m-M, nonoNO!!!
- Oh and let's not forget an old flaw that I noticed ages ago in this turd. The CPU goes straight to 70°C during boot-up but turning on the fan.. again, M-m-M, nonoNO!!! Let's just get the bloody thing to overheat, freeze completely and force the user to power cycle the machine, right? That's gonna be a great way to make them satisfied, RIGHT?! NO MOTHERFUCKERS, AND I WILL DISCONNECT THE DATA LINES OF THIS FUCKING THING TO MAKE IT SPIN ALL THE TIME, AS IT SHOULD!!! Certified fucking braindead abominations of engineers!!!
Oh and not only that, this laptop is outperformed by a Raspberry Pi 3B in performance, thermals, price and product quality.. A FUCKING SINGLE BOARD COMPUTER!!! Isn't that a great joke. Someone here mentioned earlier that HP and Acer seem to have been competing for a long time to make the shittiest products possible, and boy they fucking do. If there's anything that makes both of those shitcompanies remarkable, that'd be it.
2. If I want to conduct a pentest, I don't want to have to relearn the bloody tool!
Recently I did a Burp Suite test to see how the devRant web app logs in, but due to my Burp Suite being the community edition, I couldn't save it. Fucking amazing, thanks PortSwigger! And I couldn't recreate the results anymore due to what I think is a change in the web app. But I'll get back to that later.
So I fired up bettercap (which works at lower network layers and can conduct ARP poisoning and DNS cache poisoning) with the intent to ARP poison my phone and get the results straight from the devRant Android app. I haven't used this tool since around 2017 due to the fact that I kinda lost interest in offensive security. When I fired it up again a few days ago in my PTbox (which is a VM somewhere else on the network) and today again in my newly recovered HP laptop, I noticed that both hosts now have an updated version of bettercap, in which the options completely changed. It's now got different command-line switches and some interactive mode. Needless to say, I have no idea how to use this bloody thing anymore and don't feel like learning it all over again for a single test. Maybe this is why users often dislike changes to the UI, and why some sysadmins refrain from updating their servers? When you have users of any kind, you should at all times honor their installations, give them time to change their individual configurations - tell them that they should! - in other words give them a grace time, and allow for backwards compatibility for as long as feasible.
3. devRant web app!!
As mentioned earlier I tried to scrape the web app's login flow with Burp Suite but every time that I try to log in with its proxy enabled, it doesn't open the login form but instead just makes a GET request to /feed/top/month?login=1 without ever allowing me to actually log in. This happens in both Chromium and Firefox, in Windows and Arch Linux. Clearly this is a change to the web app, and a very undesirable one. Especially considering that the login flow for the API isn't documented anywhere as far as I know.
So, can this update to the web app be rolled back, merged back to an older version of that login flow or can I at least know how I'm supposed to log in to this API in order to be able to start developing my own client?6 -
I finally did it. I finally got rid of that client in a positive, respectful manner.
So basically, my dad has a freelance colleague. For a side project that person asked me to make him a website. My dad mentioned to said person that my sister's boyfriend does web design (he's trained to use autocad for designing the structure of furniture, nothing fancy, just straight lines and upside down doors that fail after a while).
So my brother in law charged the guy 400 money for the design. I charged the guy 200 for the programming because my dad forced me to drop down my price to fit the budget because business relationship and he obviously couldn't let my sister's boyfriend not make more money than he deserves.
In the end after waiting on the design for weeks (I literally saw him do it in photoshop all in 2 layers on his laptop in half an hour) I had to rush the project because the due date was coming up. I already had most of it done but I had to redo a good part of the front-end to fit the design structure. I also had to re-do the design in photoshop to get the images and colors I needed, then cut it up into html. So realistically, my sister's boyfriend barely did anything.
Now the deal was that I'd develop the website and perform any updates/upgrades to it. I'd also host it on my webserver for a monthly fee. My sister's boyfriend was to handle any and all content related support.
At first it was all good, I only ever spoke with the guy when he needed a feature added and he paid me well for it. Overall the hit I took in initial development was paying off. As time went by, my sister's boyfriend started ignoring the guy's calls and the guy started calling me instead.
Now, he had this deal with my brother in law where he could charge his time at 35 money an hour. That's about 4 times minimum wage for not doing much.
Then I started to basically take over all support, but I was only allowed to charge 30 an hour. Pretty reasonable still and I wasn't too busy so it was all good.
As time went by I ended up getting asked to do more and more minimal changes. At some point I had done so many minimal changes I had to charge the guy about 2 hours extra that month and he went completely mental saying I can't just work for hours without telling him beforehand. We decided I had to discuss a price before any change. I charged my time on the phone with him twice after that and both times he bitched about me being expensive and once he even said he wanted to leave.
Now comes the fun part. A week ago he had an issue that was 100% support related. He tried calling my sister's boyfriend but the guy obviously didn't pick up. He called my dad about it, and my dad ended up calling my my sister's boyfriend. Now this guy is so slimy, he purposely didn't hang up the phone knowing my dad would use his cell and assume the other party would hang up because calls cost money. The guy heard my dad call my sister's boyfriend and heard him pick up immediately. He went completely mental saying how he wants both of us to always reply and call him back immediately.
This guy was always my lowest priority. He didn't really make me money and his calls and requests were annoying and unnecessary. Add to that that I specifically didn't want to handle support and was forced into it anyway, while all 'design' things (up to figuring out where and how to display a visitor counter) absolutely had to go to my sister's boyfriend..
But regardless of that, I generally replied to his emails within 10-20 minutes and rarely more than 25 hours.
My dad agreed (for us) that we now both had to reply to him within 24 hours. I was now stuck checking my voicemail every couple hours because my sister's boyfriend sucks at life.
During his rant he threatened to leave me, again. That was the point where I said fuck it.
For the past week I've been ignoring his calls. When he emails me I don't take more than 5 minutes replying. This morning I found an e-mail with 4 requests;
He wanted me to make a content-related change;
He wanted me to give him access to the site's Google analytics;
He wanted me to add a feature and write a guide on how to use it;
And fucking finally, he wanted a 'token to transfer his website'.
I promptly emailed him back saying I added his email a week ago and that he'd gotten an email from Google about it then, that I'd changed the content he wanted me to, a price for the last dev task and a token for his domain name, adding that it's valid for 35 days and that his new host can contact me to receive a backup file of his website.
Sadly, I do have this on 10-minute dev job to do, but then I'm invoicing him all jobs I haven't invoiced yet and he can find another host willing to deal with his insanity.
The best part is I lose a webhosting client but I'm sure he'll still ask my sister's bitched parasitic boyfriend whenever he needs a photo resized and he'll still pay him 35 money for 2 minutes of work.
Fuck customers.6 -
Just today I noticed how Android Oreo (8.1) on my Nexus 6P can actually see my Bluetooth headset's remaining battery capacity. I didn't even know that my headset could send out this information! Windows never presented this information to me, and neither did my tablet which runs Nougat 7.0. Apparently there's multiple implementations of that battery level reporting too.. so lack of support in Microsoft's driver makes sense. Especially given that the Bluetooth standard already counts several thousands of pages. Compare that to Wi-Fi which is far more complex (with - some of - its 7 OSI layers, 2.4GHz and 5GHz frequencies, MIMO etc) which counts only 400-ish pages. I'm surprised that Google actually supports this Bluetooth battery reporting at all.. implementing all the standards must've been quite the chore.
TL; DR: Bluetooth is an overengineered piece of shit, and is in dire need of refactoring.26 -
For almost twenty years I have sheltered in the protective, safe, warm bosom of Debian. For a long time, it had the largest body of available software of all the distros, and by far when Ubuntu rose to prominence. So I used Ubuntu for years for the depth of package availability, and because if something esoteric was released, it would almost certainly come out first on Ubuntu, and sometimes only on Ubuntu. I was happy. Things were good.
But over time, Ubuntu and even Debian started to lean harder and harder on gnome, which I've always hated, along with all desktop environments, as they obscure the system from the user, and introduce graphical layers of abstraction, so the actual job of getting things done becomes a black art, hidden behind gnome-specific tools. This is my preference, and it's been disheartening in recent years to see the direction the desktop appears to be taking.
Then I joined devrant in 2017, and until then, I had heard peripherally about Arch, but never more than that. I had not heard of Manjaro at all. People started posting success stories and happy screenshots, and I was intrigued.
In 2018 I built a windows machine to use for parsec streaming games that wouldn't run on my linux rig. For not a great deal of money, I built a solid machine that's unequivocally better than any machine I've ever used, and installed windows on it. For a while, I was pleased. I had the best of both worlds: a windows box to stream some games from, and a linux desktop for everything else.
But after a couple months, as proton matured, I found fewer and fewer reasons to use my windows machine. My use of it declined to where I was last week: it had been months since I'd even powered it on. It was the most powerful machine I've ever used, and it was just collecting dust behind the TV in the living room. The full realization came to me while I was fighting a battle in the Gnome Takeover War, and I realized: I don't have to do this.
I pulled the newer machine out from behind the TV and installed Manjaro architect edition on it. The flexibility in the install was staggering. I am using nilfs2 for my /boot and / partitions: an option that Ubuntu has never offered. Normally they just default you into the garbage ext4 filesystem, and if you can dig deep enough, you can install with something else, though you have to really want it, in my opinion.
But Manjaro has been a dream-come-true. Pacman is easily the best package manager I have ever used, and pamac's intuitive and easy commands are a great view into AUR. Booting into the virtual console instead of a display manager has been wonderful too. On Ubuntu, I had to disable systemd's version of runlevel 5 to even get it working. But I just popped my xrandr script into my .xinitrc, and X opens with startx in less than a second. On Ubuntu, it takes about 5-10 seconds.
This has nothing to do with Manjaro, but I also switched to Radeon for this install, and I couldn't be happier about that. No more "installing" nvidia's drivers.
No more gnome. No more PPAs. No more settling. I am a Manjaro user now. Full stop. Thank you, devrant, for bringing it to my attention.11 -
I just lost faith in the entire management team of the company I'm working for.
Context: A mid sized company with
- a software engineering department consisting of several teams working on a variety of products and projects.
- a project management department with a bunch of project managers that mostly don't know shit about software development or technical details of the products created by engineering.
Project management is unhappy about the fact that software engineering practically never sticks to the plan regarding cost, time and function that was made at the very beginning of the project. Oh really? Since when does waterfall project management work well? As such they worked out a great idea how to improve the situation: They're going to implement *Shopfloor Management*!
Ever heard of Shopfloor Management? Probably not, because it is meant for improving repetitive workflows like assembly line work. In a nutshell it works by collecting key figures, detecting deviation in these numbers and performing targeted optimization of identified problem areas. Of course, there is more to Shopfloor Management, but that refers largely to the way the process just described is to be carried out (using visualisation boards, treating the employee well, letting them solve the actual problem instead of management, and so on...). In any case, this process is not useful for highly complex and hard-to-predict workflows like software development.
That's like trying to improve a book author's output by measuring lines of text per day and fixing deviations in observed numbers with a wrench.
Why the hell don't they simply implement something proven like Scrum? Probably because they're afraid of losing control, afraid of self managed employees, afraid of the day everybody realizes that certain management layers are useless overhead that don't help in generating value but only bloat.
Fun times ahead!8 -
Biggest challenge I overcame as dev? One of many.
Avoiding a life sentence when the 'powers that be' targeted one of my libraries for the root cause of system performance issues and I didn't correct that accusation with a flame thrower.
What the accusation? What I named the library. Yep. The *name* was causing every single problem in the system.
Panorama (a very, very expensive APM system at the time) identified my library in its analysis; the calls to/from SQLServer were the bottleneck.
We had one of Panorama's engineers on-site and he asked what (not the actual name) MyLibrary was, and (I'll preface this by saying I did not know about or get involved in any of the so-called 'research') a crack team of developers+managers researched the system thoroughly and found MyLibrary was used in just about every project. I wrote the .Net 1.1 MyLibrary as a mini-ORM to simplify the execution of database code (stored procs, etc) and gracefully handle+log database exceptions (auto-logged details such as the target db, stored procedure name, parameter values, etc, everything you'd need to troubleshoot database errors). This was before Dapper and the other fancy tools used by kids these days.
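The actual library was .NET 1.1 C#, but the whole "mini-ORM" boiled down to a pattern small enough to sketch. Here it is in Python against sqlite purely for illustration, with invented names; the point is the exception handler that records everything needed to troubleshoot before re-raising:

import logging, sqlite3

log = logging.getLogger("db")

def run_query(conn, sql, params=(), db_name="main"):
    # execute the statement; on failure, log the target db, the statement and the
    # parameter values (the details you actually need at 3AM), then re-raise
    try:
        return conn.execute(sql, params).fetchall()
    except sqlite3.Error:
        log.exception("db call failed: db=%s sql=%r params=%r", db_name, sql, params)
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (id INTEGER PRIMARY KEY, total REAL)")
rows = run_query(conn, "SELECT id, total FROM invoice WHERE id = ?", (42,))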
By the time the news got to me, there was a team cobbled together whose only focus was to remove any/every trace of MyLibrary from the code base. Using Waterfall, they calculated it would take at least a year to remove+replace MyLibrary with the equivalent ADO.Net plumbing.
In a department wide meeting:
DeptMgr: "This day forward, no one is to use MyLibrary to access the database! It's slow, unprofessionally named, and the root cause of all the database issues."
Me: "What about MyLibrary is slow? It's excecuting standard the ADO.Net code. Only extra bit of code is the exception handling to capture the details when the exception is logged."
DeptMgr: "We've spent the last 6 weeks with the Panorama engineer and he's identified MyLibrary as the cause. Company has spent over $100,000 on this software and we have to make fact based decisions. Look at this slide ... "
<DeptMgr shows a histogram of the stacktrace, showing MyLibrary as the slowest>
Me: "You do realize that the execution time is the database call itself, not the code. In that example, the invoice call, it's the stored procedure that's taking 5 seconds, not MyLibrary."
<at this point, DeptMgr is getting red-face mad>
AreaMgr: "Yes...yes...but if we stopped using MyLibrary, removing the unnecessary layers, will make the code run faster."
<typical head-nodders nod their heads in agreement>
Dev01: "The loading of MyLibrary takes CPU cycles away from code that supports our customers. Every CPU cycle counts."
<head-nodding continues>
Me: "I'm really confused. Maybe I'm looking at the data wrong. On the slide where you highlighted all the bottlenecks, the histogram shows the latency is the database, I mean...it's right there, in red. Am I looking at it wrong?"
<this was meeting with 20+ other devs, mgrs, a VP, the Panorama engineer>
DeptMgr: "Yes you are! I know MyLibrary is your baby. You need to check your ego at the door and face the facts. Your MyLibrary is a failed experiment and needs to be exterminated from this system!"
Fast forward 9 months, maybe 50% of the projects updated, and I come across the documentation left behind by the Panorama engineer. Even after the removal of MyLibrary, there was zero increase in performance. The engineer recommended the DBAs start optimizing their indexes and fixing the other N+1 problems discovered. I decide to ask the developer who led the re-write.
Me: "I see that removing MyLibrary did nothing to improve performance."
Dev: "Yes, DeptMgr was pissed. He was ready to throw the Panorama engineer out a window when he said the problems were in the database all along. Didn't you say that?"
Me: "Um, so is this re-write project dead?"
Dev: "No. Removing MyLibrary introduced all kinds of bugs. All the boilerplate ADO.Net code caused a lot of unhandled exceptions, then we had to go back and write exception handling code."
Me: "What a failure. What dipshit would think writing more code leads to less bugs?"
Dev: "I know, I know. We're so far behind schedule. We had to come up with something. I ended up writing a library to make replacing MyLibrary easier. I called it KnightRider. Like the TV show. Everyone is excited to speed up their code with KnightRider. Same method names, same exception handling. All we have to do is replace MyLibrary with KnightRider and we're done."
Me: "Won't the bottlenecks then point to KnightRider?"
Dev: "Meh, not my problem. Panorama meets primarily with the DBAs and the networking team now. I doubt we ever use Panorama to look at our C# code."
Needless to say, I was (still) pissed that they had used MyLibrary as a dirty word and a scapegoat for months when they *knew* where the problems were. Pissed enough for a flamethrower? Maybe.6 -
Oldschool CSS was not much fun, but I never understood how this made it any better:
<div><div><div><div><div><div>Bootstrap</div></div></div></div></div></div>
I always forgot a row, had cols inside of cols, forgot how form-groups worked, or found other ways of messing up the whole layout.
Instead of complex CSS, there was now this new complex language entirely expressed through the nesting of layers upon layers of divs. It was like LISP's brackets, but more verbose.
That was the moment I realized that fullstack is bullshit, that there are intrinsic talent differences between frontend and backend devs, and that it's OK to focus on a narrower but deeper field.8 -
The best decision I ever made was moving from a big company to a very small one.
I used to work for a large international consulting firm in the model development team. Everything moved so slowly, there were huge amounts of pointless meetings and other time-sinks, we were surrounded by people who were being paid a lot of money but added little or no value, and the general atmosphere of the company was quite depressing. We spent more time making PowerPoint presentations for senior management, explaining why you can't just hire 100 devs and get a product 100 times faster, than we actually spent developing the product.
I took a bit of a risk and moved to become the fourth person (and second developer) at a niche software producer to take over product innovation and lead product development. Immediately I felt so much happier and realised how much the previous company had worn me down. Everyone works hard and efficiently because your individual output is so much more important to the success of the company and the work you put in comes back to you financially without being syphoned by layers of valueless management levels or time-wasters.
Having responsibility, seeing the impact of your own work and being rewarded accordingly is so important for your sense of well-being. I urge you all to try it if you're stuck in a big company that's wearing you down. And if you're considering moving from a small company to a big one: don't.3 -
I’m trying to add digit separators to a few amount fields. There’s actually three tickets to do this in various places, and I’m working on the last of them.
I had a nightmare debugging session earlier where literally everything would 404 unless I navigated through the site in a very roundabout way. I never did figure out the cause, but I found a viable workaround. Basically: the house doesn’t exist if you use the front door, but it’s fine if you go through the garden gate, around the back, and crawl in through the side window. After hours of debugging I eventually discovered that if I unlocked the front door with a different key, everything was fine… but nobody else has this problem?
Whatever.
Onto the problem at hand!
I’m trying to add digit separators to some values. I found a way to navigate to the page in question (more difficult than it sounds), and … I don’t know what view is rendering the page. Or what controller. Or how it generates its text.
The URL is encrypted, so I get no clues there. (Which was the lead dev's solution to having scrapeable IDs instead of just, you know, fixing them). The encryption also happens in middleware, so it's a nightmare to work through. And it's by the lead dev, so the code is fucking atrocious.
The view… could be one of many, and I don’t even know where they are. Or what layout. Or what partials go into building it.
All of the text on the page is “resources” — think named translations that support nested macros. I don’t know their names, and the bits of text I can search for are used fucking everywhere. “Confirmation number” (the most unique of them) turns up 79 matches. “Fee” showed up in 8310 places before my editor gave up looking. Really.
The table displaying the data, which is what I actually care about, isn’t built in JS or markup, but is likely a resource that goes through heavy processing. It gets generated in a controller somewhere (I don’t know the resource name so I can’t find it), and passed through several layers of “dynamic form” abstraction, eventually turned into markup, and rendered as a partial template. At least, that’s how it worked in the previous ticket. I found a resource that looks right, and there’s only the one. I found the nested macros it uses for the amount and total, and added the separators there… only to find that it doesn’t work.
Fucking dead end.
And i have absolutely nothing else to go on.
Page title? “Show”
URL? /~LiolV8N8KrIgaozEgLv93s…
Text? All from macros with unknown names. Can’t really search for it without considerable effort.
Table? Doesn’t work.
Text in the table? doesn’t turn up anything new.
Legal agreement? There are multiple, used in many places, generated dynamically via (of course) resources, and even looking through the method usages doesn't narrow it down very much.
Just.
What the fuck?
Why does this need to be so fucking complicated?
And what genius decided “$100000.00” doesn’t need separators? Right, the lot of them because separators aren’t used ANYWHERE but in code I authored. Like, really? This is fintech. You’d think they would be ubiquitous.
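The maddening part is how little code the actual change needs once you find the right spot. Something like this (a Python sketch, not the real stack, just to show the scale of the ask):

from decimal import Decimal

amount = Decimal("100000.00")
print(f"${amount:,.2f}")   # -> $100,000.00  (the format spec does the grouping)

fee = Decimal("1234567.8")
print(f"{fee:,.2f}")       # -> 1,234,567.80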
And the sheer amount of abstraction?
Stupid stupid stupid stupid stupid.11 -
Worst collaboration experience story?
I was not directly involved, it was a Delphi -> C# conversion of our customer returns application.
The dev manager was out to prove waterfall was the only development methodology that could convert the monolith app into a lean, multi-tier, enterprise-worthy application.
Starting out with a team of 7 (3 devs, 2 dbas, team mgr, and the dev department mgr), they spent around 3 months on design, meetings, and more meetings. Armed with a 50+ page specification Word document (not counting the countless Visio workflow diagrams and Microsoft Project timeline/Gantt charts), the team was ready to start coding.
The database design, workflow, and UI design (using Visio) were well done/thought out, but problems started on day one.
- Team mgr and Dev mgr split up the 3 devs, 1 dev wrote the database access library tier, 1 wrote the service tier, the other dev wrote the UI (I'll add this was the dev's first experience with WPF).
- Per the specification, all the layers wouldn't be integrated until all of them met the standards (unit tested, free from errors from VS's code analyzer, etc)
- By the time the devs where ready to code, the DBAs were already tasked with other projects, so the Returns app was prioritized to "when we get around to it"
Fast forward 6 months later, all the devs were 'done' coding, having had very little/no communication with one another, then came the integration. The service and database layers assumed different design patterns and different database relationships, and the UI layer required functionality neither layer anticipated (ex. multi-users and the service maintaining some sort of state between them).
Those issues took about a month to work out, then the app began beta testing with real end users. The app didn't make it 10 minutes before users gave up. Numerous UI logic errors, runtime errors, and overall poor app stability. Because the UI was so bad, the dev mgr brought in one of the web developers (she was pretty good at UI design). You might guess how useful someone is being dropped in on a complex project, months after the fact and being told "Fix it!".
Couple of months of UI re-design and many other changes, the app was ready for beta testing.
In the meantime, the company hired a new customer service manager. When he saw the application, he rejected the app because he had re-designed the entire returns process to be more efficient. The application UI was written to the exact step-by-step old returns process with little/no deviation.
With a tremendous amount of push-back (TL;DR), the dev mgr promised to change the app, but only after it was deployed into production (using the "we can fix it later" excuse).
Still plagued with numerous bugs, the app was finally deployed. In an attempt to save face, there was a company-wide party to celebrate the 'death' of the "old Delphi returns app" and the birth of the new. Cake, drinks, certificates of achievement for the devs, etc.
By the end of the project, the devs hated each other. Finger pointing, petty squabbles, out-right "FU!"s across the cube walls, etc. All the team members were re-assigned to other teams to separate them, leaving a single new hire to fix all the issues.5 -
I made a functional parsing layer for an API that cleans http body json. The functions return insights about the received object and the result of the parse attempt. Then I wrote validation in the controller to determine if we will reject or accept. If we reject, parse and validation information is included on the error response so that the API consumer knows exactly why it was rejected. The code was super simple to read and maintain.
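Stripped of the framework specifics, the shape of the two layers was roughly this (a Python sketch with invented field names): parsing reports what it saw without judging, the controller decides, and a reject echoes both sets of details back.

import json

ALLOWED = ("amount", "currency")

def parse_body(raw):
    # parsing layer: clean the payload and report insights, decide nothing
    try:
        data = json.loads(raw)
    except ValueError as e:
        return None, [f"body is not valid json: {e}"]
    if not isinstance(data, dict):
        return None, ["body must be a json object"]
    cleaned = {k.strip().lower(): v for k, v in data.items()}
    insights = [f"ignored unknown field: {k}" for k in cleaned if k not in ALLOWED]
    return {k: v for k, v in cleaned.items() if k in ALLOWED}, insights

def handle_request(raw):
    # controller layer: accept or reject, and include the parse insights either way
    data, insights = parse_body(raw)
    errors = []
    if data is None or "amount" not in data:
        errors.append("amount is required")
    if errors:
        return 400, {"error": "rejected", "parse": insights, "validation": errors}
    return 200, {"accepted": data, "parse": insights}

print(handle_request('{"Amount ": 12, "note": "hi"}'))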
I demoed to the team and there was one holdout who couldn't understand my decision to separate parse and validate. He decided to rewrite the two layers plus both the controller and service into one spaghetti layer. The team lead avoided conflict at all costs and told me, even though it was far worse code, to "give him this". We still struggle with the spaghetti code he wrote to this day.
When sugar-coating someone’s engineering inadequacies is more important than good engineering I think about quitting. He was literally the only one on the team that didn’t get it.2 -
! exactly dev
I'd ditched Windows and spent a while exploring the Linux ecosystem for content creation. And I have to say, it was not a nice experience.
As much as I respect the Linux mantra of "free as in freedom" and "you need to roll up your sleeves and figure out stuff on your own", it just isn't good enough for non-dev work. Sorry guys, but I need software that gets out of my way and at least does what it's supposed to do. I can't stand a horrible UI or delays and random crashes, which is exactly what happens with most things under Linux.
To replace my Windows workflow I used the following:
1. Windows -> elementaryOS (because Debian/Ubuntu repositories seem to have the best software support, and elementaryOS is the least horrible looking thing that supports that) and then Arch, because, well, Arch.
2. Blender + Maya -> Blender + Maya on Linux.
3. Reaper + FL Studio -> Ardour + LMMS.
4. Photoshop -> GIMP + Krita + Inkscape.
5. ZBrush -> nothing :(
As you can see, my use cases are pretty much all over the spectrum.
Firstly, installing and configuring stuff. A pleasure on Windows, an absolute pain on Linux. Everything just worked on Windows, I had to wrestle with library versions and patches and unstable audio layers (Linux audio just sucks, except for JACK) on Linux.
Out of these, Blender and Maya were the best experience. But even then, both would suffer from random crashes that just didn't happen on Windows.
Ardour is actually really nice when it works. Its use of JACK for routing makes it really really flexible, but it just isn't stable enough to depend on. LMMS is utter crap. I'm sorry, but I just hate the UI. Can't stand it.
GIMP, Krita, and Inkscape can't beat Photoshop, even when you consider them together. Adobe software workflow is just so much better and more intuitive.
Blender 3D sculpting is not bad, but it's nowhere as good as ZBrush.
Also, if you're a C++ dev like me, nothing beats Visual Studio 2017. Nothing. That IDE just blows everything else out of the water. Even VSCode. And it's not slow at all, it handled a fairly large project (PBRTv3) just fine on my Windows development VM. Yes, a VM.
So...I ditched Linux and went back to Windows, but I keep Linux as a VM for when I actually want to mess with Blender or Ardour. Or some dev stuff which Windows sucks at (which is becoming less frequent because of WSL).
Out of all the above, the only one I'd consider ready for production use would be Blender. Developers of open source software, please learn from Blender. Kickass UI and user friendly operation is extremely important, you can't make a random window with GTK buttons and text boxes and arcane config files and expect people to use it for serious work.
Also, Windows beats Linux hands down as an everyday OS. It's always been rock solid, if you take care of it properly (and that goes for any OS). Updates hardly take any time because I run it on a SSD. As for all the advertising and marketing bullshit, you can block a large amount of stuff. And for what can't be blocked, well, I just have to live with it, because the alternative is compromising on my creative output, which is too much for me.
I still run Linux on my server, though. And on my embedded devices (Pi, BeagleBone, etc.). It absolutely rocks there.
I realize that Linux software is not going to improve unless we do something about it, so I'll be contributing fixes and code (the joys of being a C++ dev, yay). Still, I feel that the platform and software as a whole is just not mature enough.18 -
Honestly I see more and more abstraction layers added and, from year to year, fewer people understanding how a computer and its algorithms work. In the end it will be like the Mechanicum (or whatever the name was) from the W40k universe, where the specialists don't have a damn clue how their creations (in our case, software) work and how to optimize them anymore6
-
A little late but whatever.
About half a year ago, I started working on setting up self hosted (slippy) maps. For one, because of privacy reasons, for two, because it'd be in my own control and I could, with enough knowledge, be entirely in control of how this would work.
While the process has been going on for hours every day for about half a year (with regular exceptions), I'll briefly lay out what I've accomplished.
I started with the OpenMapTiles project and tried to implement it myself. This went well but there were two major pitfalls:
1. It was postgres database based. This is fine, but when you want to have the entire world.... the queries took insanely long (minutes, at lower zoom levels) and quite intimate postgres/tooling knowledge was required, which I don't have.
2. Due to the long queries and such, the performance was so bad that the maps could take minutes to render and when you'd want that in production... yeah, no.
After quite some time I finally let that idea sail and started looking into the MBTiles solution; generating sqlite databases of geojson features. Very fast data serving but the rendering can take quite some time.
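(For context: an .mbtiles file is just a sqlite database with a well-known schema, a tiles table keyed by zoom/column/row that holds the tile blobs, which is why serving is cheap once the expensive rendering is done. A minimal Python sketch of serving one tile; the file name is made up.)

import sqlite3

def get_tile(path, z, x, y):
    # MBTiles stores rows in TMS order, so flip the XYZ y coordinate
    tms_y = (2 ** z - 1) - y
    conn = sqlite3.connect(path)
    row = conn.execute(
        "SELECT tile_data FROM tiles WHERE zoom_level=? AND tile_column=? AND tile_row=?",
        (z, x, tms_y)).fetchone()
    conn.close()
    return row[0] if row else None   # a gzipped protobuf blob for vector tiles

tile = get_tile("planet.mbtiles", 4, 8, 5)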
After some more months, I finally got the hang of it to the point that I automated 50-70 percent of the entire process. The one problem? It takes a shitload of resources and time to generate a worldwide mbtiles database.
After infinite amounts of trial and error, I figured out that one can divide a 'render' (mbtiles aka sqlite database) into multiple layers (one for building data, one for water, one for roads and so on), so I started doing renders that way.
Result? Styling became way easier and more logical, and one could pick specific data to display; only want to display the roads? It's way simpler this way. (Not impossible otherwise, but figuring out how that works... Good luck).
Started rendering all the countries, continents and such this way, and while this seemed like a great idea, the entire world is at 3-4 percent after about a month. And while 40-70 percent generates 10 times as fast, that's still way too slow.
Then, I figured out that you can fetch data per individual layer/source. Thus, I could render every layer separately which is way faster.
Tried that with a few very tiny datasets and bam, it works. (And still very fast).
So, now, I'm generating all layers per continent. I want to do it world based but figured out that that's just not manageable with my resources/budget.
Next to that, I'm working on an API which will have exactly the features I want/need!13 -
Promising the boss a 95% model accuracy when the arXiv paper says it can only reach 86% is what I call self-checkmate2
-
Everyone and their dog is making a game, so why can't I?
1. open world (check)
2. taking inspiration from metro and fallout (check)
3. on a map roughly the size of the u.s. (check)
So I thought what I'd do is pretend to be one of those deaf mutes. While also pretending to be a programmer. Sometimes you make believe so hard that it comes true apparently.
For the main map I thought I'd automate laying down the base map before hand-tweaking it. It's been a bit of a slog. Roughly 1 pixel per mile (okay, 1973 by 1067). The U.S. is about 3.1 million square miles; this works out to roughly 2.1 million square miles instead. Eh.
Wrote the script to filter out all the ocean pixels, based on the elevation map, and output the difference. Still had to edit around the shoreline but it sped things up a lot. Just attached the elevation map, because the actual one is an ugly cluster of death magenta to represent the ocean.
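Conceptually the preprocessing script was nothing more than this (a Python/Pillow sketch; the file names and the sea-level cutoff are placeholders, not my real values):

from PIL import Image

SEA_LEVEL = 10          # placeholder threshold: anything at or below it is treated as ocean
OCEAN = (255, 0, 255)   # the "death magenta" marker colour

elev = Image.open("elevation.png").convert("L")
base = Image.new("RGB", elev.size)
px_in, px_out = elev.load(), base.load()

for y in range(elev.height):
    for x in range(elev.width):
        # keep land as a grey height value, flag everything at/below the cutoff as ocean
        h = px_in[x, y]
        px_out[x, y] = OCEAN if h <= SEA_LEVEL else (h, h, h)

base.save("base_map.png")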
Consequence of filtering is, the shoreline is messy and not entirely representative of the u.s.
The preprocessing step also added a lot of in-land 'lakes' that don't exist in some areas, like death valley. Already expected that.
But the plus side is I now have map layers for both elevation and ecology biomes. Aligning them close enough so that the heightmap wasn't displaced, and didn't cut off the shoreline in the ecology layer (at export), was a royal pain, and super finicky. But thankfully that's done.
Next step is to go through the ecology map, copy each key color, and write down the biome id, courtesy of the 2017 ecoregions project.
From there, I write down the primary landscape features (water, plants, trees, terrain roughness, etc), anything easy to convey.
Main thing I'm interested in is tree types, because those, as tiles, convey a lot more information about the hex terrain than anything else.
Once the biomes are marked, and the tree types are written, the next step is to assign a tile to each tree type, and each density level of mountains (flat, hills, mountains, snowcapped peaks, etc).
The reference ids, colors, and numbers on the map will simplify the process.
After that, I'll write an exporter with python, and dump to csv or another format.
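Something along these lines, with invented colours and biome/tile ids standing in for the real notes:

import csv
from PIL import Image

# key colour -> (biome id, tile id), filled in from the notes; values here are invented
BIOMES = {
    (34, 139, 34): ("temperate_conifer", "tile_pine"),
    (189, 183, 107): ("desert_scrub", "tile_mesa"),
}

eco = Image.open("ecology.png").convert("RGB")
px = eco.load()

with open("map_tiles.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["x", "y", "biome", "tile"])
    for y in range(eco.height):
        for x in range(eco.width):
            biome, tile = BIOMES.get(px[x, y], ("unknown", "tile_blank"))
            out.writerow([x, y, biome, tile])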
Next steps are laying out the instances in the level editor, that'll act as the tiles in question.
There are a few naive approaches:
Spawn all the relevant instances at startup, and load the corresponding tiles.
Or setup chunks of instances, enough to cover the camera, and a buffer surrounding the camera. As the camera moves, reconfigure the instances to match the streamed in tile data.
Instances here make sense, because if there's any simulation going on (and I'd like there to be), they can detect in event code when they are in the invisible buffer around the camera but not yet visible, and be activated by the camera, or deactivate themselves after leaving the camera and buffer's area.
The alternative is to let a global controller stream the data in, as a series of tile IDs, corresponding to the various tile sprites, and code global interaction like tile picking into a single event, which seems unwieldy and not at all manageable. I can see it turning into a giant switch case already.
So instances it is.
Actually, if I do 16^2 pixel chunks, it only works out to 124x68 chunks in all. A few thousand, mostly inactive chunks is pretty trivial, and simplifies spawning and serializing/deserializing.
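Engine specifics aside, the chunk bookkeeping itself is simple enough to sketch in Python (the activate/deactivate hook stands in for whatever the engine actually does with its instances):

CHUNK = 16  # tiles per chunk side; a 1973x1067 map works out to a few thousand chunks

class Chunk:
    def __init__(self, cx, cy):
        self.cx, self.cy, self.active = cx, cy, False

    def set_active(self, on):
        if on != self.active:
            self.active = on
            # engine hook goes here: spawn/despawn this chunk's tile instances

def wanted_chunks(cam_x, cam_y, cam_w, cam_h, buffer=1):
    # camera rect (in tiles) -> chunk coordinates, padded by a buffer ring
    x0, y0 = cam_x // CHUNK - buffer, cam_y // CHUNK - buffer
    x1, y1 = (cam_x + cam_w) // CHUNK + buffer, (cam_y + cam_h) // CHUNK + buffer
    return {(cx, cy) for cx in range(x0, x1 + 1) for cy in range(y0, y1 + 1)}

chunks = {(cx, cy): Chunk(cx, cy) for cx in range(124) for cy in range(68)}

def update_camera(cam_x, cam_y, cam_w=40, cam_h=25):
    visible = wanted_chunks(cam_x, cam_y, cam_w, cam_h)
    for key, chunk in chunks.items():
        chunk.set_active(key in visible)

update_camera(500, 300)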
All of this doesn't account for
* putting lakes back in that aren't present
* lots of islands and parts of shores that would typically have bays and parts that jut out, need reworked.
* great lakes need refinement and corrections
* elevation key map too blocky. Need a higher resolution one while reducing color count
This can be solved by introducing some noise into the elevations, varying, say, within one standard deviation.
* mountains will still require refinement to individual state geography. Thats for later on
* shoreline is too smooth, and needs to be less straight-line and less blocky. less corners.
* rivers need added, not just large ones but smaller ones too
* available tree assets need to be matched, as best and fully as possible, to types of trees represented in biome data, so that even if I don't have an exact match, I can still place *something* thats native or looks close enough to what you would expect in a given biome.
Ponderosa pines vs white pines for example.
This also doesn't account for 1. major and minor roads, 2. artificial and natural attractions, 3. other major features people in any given state are familiar with. 4. named places, 5. infrastructure, 6. cities and buildings and towns.
Also I'm pretty sure I cut off part of florida.
Woops, sorry everglades.
Guess I'll just make it a death-zone from nuclear fallout.
Take that gators!5 -
Ye, so after studying for an eternity and doing some odd jobs here and there, all I can show for it are the following traits:
* Super knowledgeable in arm/Intel assembly language
* C-Veteran with knowledge of some sick and nasty C-hacks/tricks which would even sour the mood of your grandma
* Acquired disdain for any and all scripting languages (how dare you write something in one line for which I need a whole library!)
* All-in-all low-level programmer type of guy (gimme those juicy registers to write into!)
After completing the mandatory part of my computer science studies, all I did was immerse myself into low-level stuff. Even started to hold lectures and all.
Now I'm at the cusp of being let free into the open market.
The thing is: I'm pretty sure that no company is really interested in my knowledge, as no one really writes assembly anymore.
Sure, embedded programming is still a thing, but even that is becoming increasingly more abstract, with God knows how many layers of software between the hardware and the dev, just to hide all the scary bits underneath.
So, are there people in here who're actually exposed to assembly or any hands-on hardware-programming?
Like, in a "which bit in which register/addr do I need to set" kind of way.
And if so, what would you say someone like me should look out for in a company to match my interests to theirs?
Or is it just a pipe dream, so I'd need to brace myself for a mundane software engineering career where I process one ticket at a time?
(Just to give a reference: even the most hardware-inclined companies I found "near" me are developing UIs with HTML5 to be used in some such environment ....)12 -
This will definitely trigger many but the truth regardless of how you feel is the greatest programmers are those who understand both the hardware level and software .. only then are you more than a dev or programmer.. you are an engineer...
I challenge the devs who disbelieve to go out and learn to build circuits, write optimized, efficient bare metal code: no sdk.. no api... no drivers... remove the unneeded abstraction layers that have blinded you... build it yourself, expand your potential and understanding..
Not only will you become more valuable overall, but you will write better code as you are more conscious of performance and space and physics of the physical layer.
I’m not talking about Arduino or raspie
Those who stand strong that high level abstraction languages and use of third party apis is a sufficient sustainable platform of development are blind to reality.. the more people who only know those levels, the less people pushing the industry of the low level.., which is the foundation of everything in the industry.. without that low level software the high level abstractions and systems cannot run
Why did we have huge technology advancements from 70s to early 2000s.... because more people in our industry understood the hardware layer..: wrote the software at the less abstracted layers..
Yeah it takes longer to do things at that low level of abstraction.. but good robust products that change the world and industry don't take a few weeks or months to build.....
Take this with what you will... I’m just trying to open the eyes of the blind developers to the true nature and reality of our industry23 -
Switched back to windows because I needed IIS for work and I did miss having a touch screen (could not get driver working on Linux).
A few gripes.
I mean, the standard "oh great, half a day downloading and updating my machine" applies.
The thing I forgot about Windows is that after everything I do it wants to restart. Updating itself forced the computer to restart several times, wtf.
Powershell (ironically) holds a shadow of bash's power
So many "power user" actions are done with a gui, dear lord give me a terminal command and a man page any day over the convoluted way to do some actions. Changing permissions for IIS was several layers of gui dialogues, where it would be a couple of commands in bash.
Sorry to be unoriginal and moan about an OS, as an end user windows is great and a lot more streamlined and arguably prettier, but as a programmer it doesn't make life half as easy as the realm of *nix1 -
My first task in my current company, a few years ago.
I had to add features to a 10 year old microcontroller-based device written in C.
There was a struct named "global", which held hundreds of other structs that held variables or even more structs.
If one had printed the structure of this mess it would have needed several pages.
This "global"-struct was used in every single sourcefile to store and pass data around. Obviously there was no documentation and often useless comments.
Additionally there were a few protocol stacks involved, mainly similar, only differing in one or two protocol layers.
The protocol stack was implemented by setting flags in the "global" struct in every protocol layer and keeping the application data in a buffer.
The complete telegram with all layer specific data (header, checksums, etc.) was then build at one single point right before sending it, based on the flags and the data buffer.
As there was no way to reuse protocol layers with this implementation, three protocol implementations with their own special telegram builders existed in parallel, although they were nearly identical.
I needed a fourth variant of the protocol stack, so I had no choice but to make another copy with some minor changes.
But there was a benefit from this task.
As I had to write the software for this device's successor from scratch, I had learned for many things how not to do them :-) -
web technologies rot your brain into a festering deadly biohazard mush. web technologies are the worst thing that ever happened to this world. fucking festering web shitosystem fuck this disgusting stupid fragile opaque bloated universe-sized chunk of retarded pukeshit.
I JUST WANT TO MAKE FUCKING GAMES, NOT HAVE MY BRAIN AND SOUL CONSTANTLY ROTTED BY THIS FUCKIN MONUMENT TO UTTER RETARDED LOBOTOMIZED HUMAN INCOMPETENCE FUCK YOU ALL FUCK ALL THIS SHIT FUCKFUCKFUCKFUCK DISGUSTING FUCKIN MINDRAPE PEDOPHILIACS SHOULD STOP FUCKING "INVENTING" SHITPOOLS.
WHEN
THE
FUCK
WILL
SOMEONE
COMPETENT
BE
THE
INVENTOR
OF
SOME
PIECE
OF
IT.
whoever were the rapists who "invented" php, js, html, css, SQL, and all the bullshit about how it's supposed to be configured and communicate with each other should have died of starvation in a fuckin ditch while being raped by squirrels... before they managed to "invent" any of that disgusting shit.
fuck you with your fuckin linux bullshit philosophy which keeps rotting all your brains thinking that this is fine and it can be fixed just by piling more and more layers of fucking shit on top of all this shit.
FUCK.
YOU.
ALL.
HTML is the core building block of the web. Why does everyone feel the need these days to abstract and virtualize and reinvent the wheel? You're only slowing down your site... plus adding layers of confusion for newcomers and adding more code to maintain.
-
A QA personal voice assistant that runs locally without the cloud; it's like a never-ending project. I look at it from time to time and time passes by. Chatbots arrived, some decent voice algorithms appeared. There is less and less stuff to code since people have progressed in that area a lot.
I want to save notes using voice, search through them, hear them, find some stuff in public data sources like Wikipedia and also hear that stuff without using my hands, read news articles and stuff like that.
I want to spend more time on the math and core algorithms related to machine learning and deep learning.
Problem is, once I remember how basic network layers and error-correction algorithms work, or how a particular deep learning algorithm is constructed and why, a week has already passed and I don't remember where I started.
I did it a couple of times already and every time I remember more than before, but understanding the core requires sitting down with pen and paper and math problems, and I don't have time for that.
Now that I'm thinking about it - maybe I should write it down somewhere in an organized way. Get back to blogging and write articles about what I learned. This would require twice the time, but maybe it would help me not forget.
I'm mostly interested in NLP, TTS, STT. WaveNet, Tacotron, BERT, RoBERTa, sentiment analysis, graphs and QA stuff. And now crystallography, cause crystals are just organized graphs in 3D.
Well, maybe if I'm lucky I'll retire in the next decade, or at least take a year or two off to have plenty of time to finish this project.
This is a long post and if someone comments without reading carefully I don't care about that person's opinion.
I have 3 accounts here, and that is a must have for me. Let me explain:
Let's think of people and who they are in layers.
The innermost layers are made of private and intimate things: fears, dreams, shames, basically things that are mostly shared with very close people, like family, best friends, and specially significant others.
On the other hand, outermost layers are the public persona, who you are as a citizen, who you are in your profesion, and so on.
So, you wouldn't normally tell your boss about your favorite sex positions.
Let's also say there can be layers in the middle, and all the layers sometimes overlap, but let's not get too deep into this as I think I got the point across.
From here on I explain the original thesis.
I am a developer, and as such I want to fulfill my needs on dev communities, one of them being devrant.
I wish to learn from other devs, I expose my (sometimes controversial) points of view. I rant about annoying shit in the workplace.
But also, at some level, I wish to be taken seriously as a developer, I wish to build a reputation, and I wish to be accepted, even in a shallow social level. There is a social factor to what we do and it's totally normal.
Now, the problem is that I also would want to express my inner self.
So what I do is I don't use my main account for that, I use another, in fact 2 other accounts.
There are several reasons for that:
* I want to hide intimate shit from trolls.
Imagine I griefpost about a loved one who died, then later find myself in a heated discussion about some language, and then some troll comments something like "I'm glad your x died". I wouldn't react very well.
* I want to keep my posts consistent.
If people become interested in what I post as a dev, then they are going to expect dev related stuff from me. If I start posting like controversial points of view, that's not very cool because I'd be doing like a bait n switch on them.
* I want to maintain a reputation, and I want to not get banned on the main account
Reputation as a professional is a real thing, and it shouldn't be affected by your personal shit.
Also sometimes you argue, and things get heated, and sometimes you get suspended or banned.
You try your hardest to be respectful, but in some communities, some mods are trigger happy.
By restricting this to your alt account, you're in a way promising that you'll be on your best behaviour on your dev account, because that means being professional.
Now, I said I had 2 other accounts.
The reason for having 2 is because I separate two layers:
In the 2nd account I am open and direct regarding my points of view, and more argumentative, but still trying to be relatively civil. I would also post things that might be controversial or not popular. I try to be real basically.
You can conclude that the 2nd account is the one posting this, since this post could trigger some people.
In the 3rd account, I talk about intimate shit like traumas, fears, emotional pain, things I know I'll get support for (the same support I give others when in need) and are not controversial in any way.
This way I can vent painful things and avoid trolls.
Cool people appreciate it when you're transparent about your shortcomings and dark thoughts.
But it takes one asshole on a high horse to judge you. And sometimes you need to give that asshole the middle finger without being afraid of ruining your reputation
or getting banned,
or being scared of that asshole laughing about your intimate shit (again, I use this account for that)
I know it sounds like I have multiple personalities but I swear I'm ok, and hopefully what I said makes sense. People might say "don't use alt accounts, go to another site", but I find that devrant has some interesting people.
The obvious downside is that you end up knowing people more than what they assume, because you interact with them through different accounts.
This is kinda shady, but I'm not interested in taking advantage of others anyway so...
Eventually you reach a point at which you see that even Java's baked-in libs lack abstraction and more layers.
-
Don't you just love customers?
It all began when they showed us the flyers they were printing for their new products, and someone at our company who doesn't work here anymore had the brilliant idea of copying it to their webshop, as a fucking gimmick... Ooohh man, the customer didn't seem to understand it was only visual
They wanted the 3d layering effect to be dynamic, so each product would have its own with custom colours
So it was made
A few weeks later they didn't want the informational text, they wanted links to each product that the layer uses
Sounded logical, so it was made
Again some time later, they noticed that the layers were not textured, but just plain
I argued against it because it would add unnecessary loading time for some 300 by 400 px element but they insisted
So they got what they wanted
A few days later they said that the textures were of low quality, and that we had to create ones with higher quality
Again our management said, yes
We made them ~twice the size of the element in image pixels to create a higher-definition image
Then the customer wanted that the layers should change based on some selection menu above it
(At this point we realized that it would no longer be just a fun little gimmick)
So we tried to refactor/rebuild it to remove most if not all of the hacks we did just to make the customer happy; that took too long for them (the customer), so we had to revert back to the hacked-together version because otherwise we would not be done on time (commanded by management)
But again, we ... I say 'we' as in the company but realistically I've been the only one who has worked on the fucking abomination
But I digress...
A few stupid requests later, some layer images are almost fully transparent PNGs that are almost 1 MB in file size each (some products have 5 or even more layers) and the goddamn thing now has to account for optional layers...
I AM FUCKING SPENT... I'VE JUST COME BACK FROM VACATION BUT I ALREADY NEED IT AGAIN... FUCKING WORKING 60 HOURS A WEEK JUST TO KEEP ONE CUSTOMER HAPPY WHILE OTHER PROJECTS BREATHE DOWN MY NECK
my brain feels like an AI. It just slices things it sees and layers them over and over again. It doesn't even change things, leaving them pristine and intact, and it doesn't filter stuff out. I cite memes exactly, word by word, with the exact intonation, because I'm literally just lip-syncing to that meme playing in my head as if I was watching a YouTube video. Some days I'm not even conscious of my surroundings, I don't realize where I am or what I do, I'm just caught in that process I can barely put into words. People ask me to do something for them, I do it, and they're like "no! it's not what I asked for, well, it is, but not in this sense!" If they asked me whether I could make their company the most profitable one in their niche, my brain would probably decide to instead sink and destroy the other companies there. All that unspoken, "common sense" knowledge, I don't understand. I feel detached, as if everyone else was "in" on something, some common notion, meanwhile I'm alone with my perfect things. I feel like a perfect Haskell codebase trying to interact with the dirty biker-bar-gloryhole equivalent of an API. I want things to be exact, I want things to be precise, I want the words you say to have a specific meaning that I can understand, and I'll ask you even though it takes overcoming my anxiety and guilt over asking "stupid" questions. If you throw in some clue, my brain will generate a Vsauce video's worth of elaboration on it, and I'll just tell it to you. Sometimes I feel like I just don't fit, I can't have fun at a party with other people; if there are more than five of them, I'll probably cry for no apparent reason. My consciousness operates smoothly, and then it doesn't; it overheats, crashes and burns, then comes the numbness and derealisation.
I'm not okay. Now more than ever, I sometimes want to just end it.
(Ok, I love js in general (specially with es6), but here's something I hate about the "ecosystem". Dont take this too srsly also)
Holy fucking gagged shit, these project readmes that go on far too long about the project objective instead of stating the actual thing(s) the software does.
WHAT DOES IT FUCKING DO!?
STOP BEING FUCKING FANCY ABOUT YOUR PROJECT.
Jesus christ, people jacking off about their awesome tool and how it will make everyone happy. No one cares.
"shitsmoke.js is a framework that focuses on delivering truly reliable data with static checking enabled on deployment."
WHAT THE SHIT DOES THAT MEAN?
Gimme a bullet point with the goddamn features (not the fucking BENEFITS) and I'm done.
These are like layers of marketing bullshit text you have to get through, getting more technical as you go on.
But sometimes they never do a technical summary, THEY GO STRAIGHT INTO THE GODDAMN API. And the API docs belong on a docs site; there's github.io and there are packages that take care of that.
You're like a goddamn linguistic detective, trying to dissect the meaning of these words to understand if some package is what you're looking for.
And I don't wanna visit another website to understand what it does either!
AI so far....
2012: We can do more than 5 layers whoa
2013: It works on text too!
2014: Let’s build infras with frameworks & cloud compute
2015: AlphaGo! Singularity!
2016: Wait it’s racist & sexist
2017: Deepfakes scary
2018: No idea how it works
2019: Whatevs time to productize $$$
2020: ??
This started as an update to my cover story for my Linked In profile, but as I got into a groove writing it, it turned into something more, but I’m not really sure what exactly. It maybe gets a little preachy towards the end so I’m not sure if I want to use it on LI but I figure it might be appreciated here:
In my IT career of nearly 20 years, I have worked on a very wide range of projects. I have worked on everything from mobile apps (both Android and iOS) to eCommerce to document management to CMS. I have such a broad technical background that if I am unfamiliar with any technology, there is a very good chance I can pick it up and run with it in a very short timespan.
If you think of the value that team members add to the team as a whole in mathematical terms, you have adders and you have subtractors. I am neither. I am a multiplier. I enjoy coaching, leading and architecture, but I don’t ever want to get out of the code entirely.
For the last 9 years, I have functioned as a technical team lead on a variety of highly successful and highly productive teams. As far as team leads go, I tend to be a bit more hands on. Generally, I manage to actively develop code about 25% of the time to keep my skills sharp and have a clear understanding of my team’s codebase.
Beyond that I also like to review as much of the code coming into the codebase as practical. I do this for 3 reasons. I do this because as a team lead, I am ultimately the one responsible for the quality and stability of the codebase. This also allows me to keep a finger on the pulse of the team, so that I have a better idea of who is struggling and who is outperforming. Finally, I recognize that my way may not necessarily be the best way to do something and I am perfectly willing to admit the same. I have learned just as much if not more by reviewing the work of others than having someone else review my own.
It has been said that if you find a job you love, you’ll never work a day in your life. This describes my relationship with software development perfectly. I have known that I would be writing software in some capacity for a living since I wrote my first “hello world” program in BASIC in the third grade.
I don’t like the term programmer because it has a sense of impersonality to it. I tolerate the title Software Developer, because it’s the industry standard. Personally, I prefer Software Craftsman to any other current vernacular for those that sling code for a living.
All too often, our work is compiled into binary form, both literally and figuratively. Our users take for granted the fact that an app "just works", without thinking about the proper use of layers of abstraction and separation of concerns, Gang of Four design patterns or why an abstract class was used instead of an interface. Take a look at any mediocre app's review distribution in the App Store. You will inevitably see an inverse bell curve. Lots of 4's and 5's and lots of (but hopefully not as many) 1's and not much in the middle. This leads one to believe that even given the subjective nature of a 5 star scale, users still look at things in terms of either "this app works for me" or "this one doesn't". It's all still 1's and 0's.
Even as a contributor to many open source projects myself, I’ll be the first to admit that have never sat down and cracked open the Spring Framework to truly appreciate the work that has been poured into it. Yet, when I’m in backend mode, I’m working with Spring nearly every single day.
The moniker Software Craftsman helps to convey the fact that I put my heart and soul into every line of code that I or a member of my team write. An API contract isn’t just well designed or not. Some are better designed than others. Some are better documented than others. Despite the fact that the end result of our work is literally just a bunch of 1’s and 0’s, computer science is not an exact science at all. Anyone who has ever taken 200 lines of Java code and reduced it to less than 50 lines of reactive Kotlin, anyone who has ever hit that Utopia of 100% unit test coverage in a class, or anyone who can actually read that 2-line Perl implementation of the RSA algorithm understands this simple truth. Software development is an art form. I am a Software Craftsman.
#wk171 -
My biggest influence on coding style is working with other people's code. I know the temptation to write "clever" code and I've been (and probably still occasionally am) guilty of it myself, but it's not until you have to debug someone's one-liner iterator which has !(i-j) as the stop condition that you start to appreciate dumb, boring, obvious code.
If having a series of if checks in a long list makes it readable, keep it that way. If it makes it more readable to rewrite it into a nested switch-case with a couple of ternary bits, go ahead. Just don't spend half a day wrapping it up into two layers of abstraction that will require an onboarding process for the rest of the team.
Listen, I really understand you want to know how often a certain resource is downloaded/viewed and so on. But what gets on my nerves is having to sign up with my email address every fucking time I want to see your semi-tech-but-actually-selling-you-a-pile-of-sperm-fermented-shit whitepaper. Yes, I know there is something called disposable email addresses and such... But if stuff is 'free' as you say, then make it available for free!
Every time I think 'hey, this is actually relevant to my interests, let me read up more on that...' I hit the fucking 'insert your email for a free download'
Fuck off! Put your fucking form in the pits of hell and seal it in a fucking fucking dome next to fucking research subject Akira with 99 fucking layers of fucking nuclear-blast-proof wall domes! I don't want you to fucking send me your fucking spam mails about every ideafart your sales dept has fired because they were high on computer cleaner spray tubes and thought 'let's trick those stupid people into our marketing scheme', go and fucking jump into a barrel of highly concentrated radioactive waste!
The only thing you manage to do for me like that is to fucking close the tab I had a slight interest in and never look back again!
Am i the only one getting angry about this?undefined always a fucking catch fuck your metrics when free isn't free signup for free stuff is bollocks2 -
Oh my dear internet,
FUCK THIS FUCKING SHIT
I AM SICK AND TIRED OF IT, WHO BUILT THIS HACKED TOGETHER ORWELLIAN SWAMP PIT?
Fuck the same fucking Envato template on every content page with 70 layers of sidebars, inline ads, popups, cookies and content shifting as if I was playing CATCH UP WITH YOUR FUCKING CONTENT.
FUCK the same fucking annual upselling 'plans' on every 7-day trial overengineered scam app that requires me to sign up for 1 fucking, falsely advertised task where my fucking password generator doesn't even recognize the input as a password field so I have to cmd+, to my FUCKING BABYLONIAN PASSWORD ARCHIVES PROMPTING ME FOR THE MASTER PASSWORD.
Thank god I can at least CREATE A BURNER CREDIT CARD THAT FREEZES ITSELF BECAUSE I CANNOT BE BOTHERED TO UNSUBSCRIBE FROM YOUR FUCKING STEAMING CRAP.
FUCK every fucking step I take being recorded by our CYBERPUNK OVERLORDS REQUIRING ME to sign up for 5 different fucking privacy protection tools' annual plans or duct tape some open source shit onto my browser just for some BASIC PRIVACY WHILE TRYING TO NAVIGATE ALL THE OTHER 5000 annual-plan naval mines like A FUCKING FRENCH SUBMARINE IN 1940 GERMAN WATERS.
FUCK my walled garden scam ecosystem not being compatible with your walled garden scam ecosystem prompting me to reactivate my old SATANIC GOOGLE DON'T BE EVIL ACCOUNT from 2012 sending me on a DANTE ALIGHIERI STYLE ODYSSEY THROUGH THE 9 LAYERS OF PASSWORD RESET QUESTIONS, UNEXPECTED ERROR, 2FA MY PHONE DIED HELL to come out on the other side as a broken man.
Thank GOD I have your useless SUPPORT PAGE to aid with my signup problems, which is actually just an FAQ with a hidden EASTER EGG HUNT for your support form CRISP AI BOT THAT IS ALSO 'currently experiencing high demand due to COVID', which is peculiar since that was 3 years ago, but fortunately for you it enabled you to fire ALL YOUR SUPPORT STAFF AND REPLACE THEM WITH THIS BANNER.
I might as well just SCRAPE your fucking content, it'd be faster.
And although it is quite funny, FUCK THIS PAGE TOO for having me create another one of 10,000 accounts to write this shit, where my browser firmly placed a newly created burner email into the PASSWORD FIELD.
I do not know how we managed to create something that is even more unwieldy than 56k DIAL-UPS, but I know that if this shit continues I'll have to train my own AGI to proudly interact with all of this STUPID SHIT on my behalf, or I'll have to move into THE FUCKING MOUNTAINS AND LIVE WITH THE DEER.
The dangers of PHP eval()
Yup. "Scary, you'd better use include instead" is what I read all the time, everywhere. I want to hear good case scenarios and feel safe with it.
I use the eval() method as a good resource to build custom website modules written in PHP which are stored in and retrieved back from a database. I ENSURED IT IS SAFE AND CAN ONLY BE ALTERED BY PRIVILEGED USERS. THERE. I SAID IT. You could as well develop a malicious module and share it to be used on the same application, but this application is just for my use at the moment so I don't wanna worry more or I'll become bald.
I had to take out my fear and confront it in front of you guys. If I had to count every single time somebody on Stack Overflow or in the comments of the PHP documentation mentions the dangers of using eval, I'd have quit already.
Tell me if I'm wrong: in a safe environment and with a trustworthy piece of code, is it OK to execute eval('?>'.$pieceOfCode); ... Right?
The reason I store code in the database is because I create/edit modules in the web editor itself.
I use my own coded layers to authenticate a privileged user: a single way to grant access to admin functions through a unique authentication tunnel, granting the privileged user access to the editor or the ability to send API requests, custom htaccess rules to protect the whole filesystem behind the domain root path, a custom URI controller + SSL. All this should do the trick to safely use the damn eval(), is that right?!
Unless malicious code is found in the stored code prior to its evaluation.
But FFS, in such a scenario, why not rather fuck up the framework filesystem instead? It is one password closer than the database.
I will need therapy after this. I swear.
If 'eval is evil' (as it appears in the suggested tags for this post) how can we ensure that third-party code is ever trustworthy without even looking at it? This happens already with Chrome extensions, or even phone apps, long after reaching millions of devices.
Not a rant, just the completion of a very demanding and interesting task for this week.
Wrote a whole data scheme for this enterprise app my company is developing. Very proud of it, since it has a very restricted size, multiple layers of encryption and data verification, several user types with different requirements, and it all has to be rock solid in an offline environment.
The punchline is... I enjoyed writing the documentation for the whole package more than I should have, I guess... spent the whole day being very thorough and documenting every member, function, constructor and exception.
Feelin fabulous. -
This was initially a reply to a rant about politics ruining the industry. Most of it is subjective, but this is how I see the situation.
It's not gonna ruin the industry. It's gonna corrupt it completely and fatally, and it will continue developing as a toxic sticky goo of selfishness and a mandatory lack of security until it chokes itself.
Because if something can get corrupted, it will get corrupted. The only way for us as a species to make IT into a worthy industry is to screw it up countless times over the course of a hundred years until it's as stable and reliable as it can possibly be and there are as many paradigms and individually reasonable standards as there can possibly be.
Look around, see the ridiculous number of stupid JavaScript frameworks, most of which are just shitcode upon vulnerabilities upon untested dependencies. Does this look to you like an uncorrupted industry?
The entire tech stack is rotting, from the hundreds of thousands of lines of proprietary firmware and drivers, through the overgrown startup scene, to fucking Node.js, and technologies created just a few decades ago are unacceptable from a security standpoint. Check your drivers and firmware if you can; I bet you can't even see the build dates of most firmware you run. You can't even know if it was built after any given vulnerability for that specific microcontroller, or whatever.
Would something like this work in chemical engineering? Hell no! This is how fucking garage meth labs work, not factories or research labs. You don't fucking sell people things without mandatory independent testing. That's how a proper industry works. Not today's IT.
Of course it's gonna go down in flames. Greed has corrupted the industry, and there's nothing to be done about it now but working as much as we can, because the faster we move the sooner we'll get stuck and the sooner we can start over on a more reasonable foundation.
Or rely on layers of abstraction and expect our code to be compilable on anything the future holds for us.
So just now I had to focus on a VM running in virt-manager.. common stuff, yeah. It uses a click of le mouse button to focus in, and Ctrl-Alt-L to release focus. Once focused, the VM is all there is. So focus, unfocus, important!
Except Mate also uses Ctrl-L to lock the screen. Now I actually don't know the password to my laptop. Autologin in lightdm and my management host can access both my account and the root account (while my other laptop uses fingerprint authentication to log in, but this one doesn't have it). Conveniently my laptop can also access the management host, provided a key from my password manager.. it makes more sense when you have a lot of laptops, servers and other such nuggets around. The workstations enter a centralized environment and have access to everything else on the network from there.
Point is, I don't know my password and currently this laptop is the only nugget that can actually get this password out of the password store.. but it was locked. You motherfucker for a lock screen! I ain't gonna restart lightdm, make it autologin again and lose all my work! No no no, we can do better. So I took my phone which can also access the management host, logged in as root on my laptop and just killed mate-screensaver instead. I knew that it was just an overlay after all, providing little "real" security. And I got back in!
Now this shows an important security problem. Lock screens obviously have it.. crash the lock screen somehow, you're in. Because behind that (quite literally) is your account, still logged in. Display managers have it too to some extent, since they run as root and can do autologin because root can switch user to anyone else on the system without authentication. You're not elevating privileges by logging in, you're actually dropping them. Just something to think about.. where are we just adding cosmetic layers and where are we actually solving security problems? But hey, at least it helped this time. Just kill the overlay and bingo bango, we're in!
To those of us who suffer from "Not invented here syndrome", I want you to ask yourself this question. If "reinventing the wheel is so valuable", would you re-implement the entire OSI stack?
No, as it would be a COMPLETE waste of time!!!
In all the layers below your application, several things related to how your code gets presented to your end-user are abstracted away from you. If you are able to accept that completely, why do you feel the need to re-implement every well-understood part of your particular project?
Cars, for example, are mostly made from standardized parts that solve well-understood problems. A car may then have a few custom parts that solve some novel problems to make it stand out from the rest.
Buildings are made completely from standardized parts, with regulations on how they are put together and some room for artistic flair.
If software wants to be equally respected, we need to get to that point.
DON'T reinvent the wheel, just use battle-tested parts and focus on what your project is trying to solve. It will be way more fruitful and fulfilling.
/rant
Alright, so now I have to extend a brand new application, released to LIVE just weeks ago by devs at our client's company. This application is advertised as a very well structured, easy-to-work-on, µservices-based masterpiece.
Well either I lack a loooot of xp to understand the "µservices", "easy to work on" and "well structured" parts in this app or I'm really underpaid to deal with all of this...
- part of business logic is implemented in controllers. Good luck reusing it w/o bringing up all the mappings...
- magic numbers every-fucking-where... I tried adding some constants to make it at least a tiny bit more configurable... I was yelled at by the lead dev of the app for this later.
- crud-only subservices (wrapped by facade-like services, but still.. CRUD (sub)services? Then what's a repository for...?). As a result devs didn't have a place where they could write business logic. So business logic is now in: controllers (also responsible for mapping), helpers (also application layer; used by controllers; using services).
- no transactions wrapping several actions, like removing item from CURRENT table first and then recreating it in HISTORY table. No rollback/recovery mechanism in service layers if things go South.
- no clean-code. One can easily find lines (streams) 400+ cols long.
- no encapsulation. Object fields are accessed directly
- Controllers, once they get a result from Services (i.e. the Facade), must have a tree of: if (result instanceof SomeService.SomeSubservice1.Item1) {...} else if (result instanceof SomeService.SomeSubservice2.Item4) {...} etc. to build a proper DTO. IMO this is not a way to build abstraction - the application should NOT know the services' internals.
- µservices use different tables (hats off for this one!) but their records must have the same IDs. E.g. if I order a burger and coke - there are 2 order items in my order #442. When I make a payment I create an invoice which must have an id #442. And I'm talking about data layer, not service or application (dto)! Shouldn't µservices be loosely coupled and be able to serve independently...? What happens if I reuse InvoiceµService in some other app?
What are your thoughts?
This huge OS project, Magento, has TONS of guidelines, most about decoupling, and it has an extended MVC structure with even more layers than those 3.. All good in theory, but guess what.. the guidelines are not followed..
Changing order of two blocks in the view breaks business logic.. So much for decoupling.. You would not believe how many hours I've spent debugging this..
And I can't believe I've dedicated 12 years of my professional life to this platform..
Facebook phasing out the old Instagram API made my life so much more fun. Now, to get a feed OF MY OWN ACCOUNT'S POSTS that I could filter by tags, I need to go through two layers of authorisation - and then still jump through ridiculous hoops to get those goddamn tag lists. JESUS CHRIST. I hate Zuckerberg and I wish him a rusty guillotine when the time comes.
-
I’ve now discovered that management actually decides for themselves what software engineering is. 🧐
It is getting increasingly common that in different architectural groups the decision has already been made… by management… without actually passing through our review… by us, the slightly more senior blokes and gals.
Not even a discussion? On the fit?
That leads me to the conclusion, since I consider the management (at least the two or three closest layers) are morons, good at talking but not really knowing anything about what we do (we kind of take stuff and make other stuff from it by using energy and other stuff in HUGE FUCKING FACILITIES AROUND THE PLANET), that even they did not make the decision. It was forced upon them. They did not decide either! Because they can’t! Because they are idiots all of them!
I have not investigated this issue but this is the logical conclusion. Or not.
Recently, for instance, decisions were made to route information flows by some tech. Some new tech. At some place in our eco-system. At a certain time. And, if we were to have reviewed this initiative in our process we would have said:
”Well, I hear you! But we are not going to do that right now because WE ARE IN THE MIDDLE OF THE FUCKING HUGE GLOBAL PROJECT THAT CHANGES PRETTY MUCH FUCKING EVERYTHING AND WE CAN NOT JUST IN THE MIDDLE OF THE FUCKING EXECUTION PROCESS OF THE PROJECT CHANGE THE FOUNDATIONS OF MESSAGE ROUTING BECAUSE WE LACK THE NUMBER OF HUMANS TO DO THE FUCKING JOB. So, we need to take a look at this and to get a better understanding when we can make this happen.”
What is the point of having this step in our organization if it is just pass-through? What is the point? Meetings? Just having meetings? Spending time mastering the organizational skill of administrating meetings? Feeling important? Using big words (holistic being my favourite)?
Below, junior devs are being hired to do stupid stuff that does not need doing. For months and months.
I believe now that half of the dev staff does not need to be there and three quarters of the team, service, delivery (etc.) managers are unnecessary. I mean, the good juniors are going to change jobs soon either way and we are stuck in this vicious cycle where we are not being allowed to be innovative in software engineering. Stability is of the essence here but the rate of our releases is just silly slow. I would say that we are far, far away from any track that leads us to where we want to be. Agile. Innovative. Close to business. Learning. Teaching. Faster. Stability despite responding to and implementing changing business needs.
And then there are the consultants…
*sigh*
PATH TOO LONG
FUCK WINDOWS
it has gigs and gigs and asks for more, but in the end it's geological layers of shit on top of each other, turtles all the way down, and the last one stands over DOS
I want to know the name of the evil mastermind who once conceived the "literal" function in Sequelize.
- You design a method to insert pieces of raw SQL exactly the way they are written, no further processing
- You release this method, you call it LITERAL to make sure people know its intended purpose: it is used to insert LITERALLY everything you write, nothing more and nothing less
- Then make sure this "literal" method changes the fucking case of column names. Because that's what "literal" means in the head of this rabid animal: you arbitrarily change the code written by the developer
WHY
WHY ARE ALL AR ORM DESIGNED BY FUCKING ANIMALS
ELOQUENT IS TRASH, SEQUELIZE IS TRASH, TENS OF DEVELOPERS AT WORK TO ALCHEMICALLY CREATE THE MOST ROTTEN CODE THEY POSSIBLY CAN, BECAUSE YOU MUST NOT BE ALLOWED TO WRITE ANY QUERY MORE ADVANCED THAN "SELECT * FROM users WHERE id =1", NOT A FUCKING SHRED OF DOCUMENTATION AND 16 MILLION LAYERS OF ABSTRACTION TO MAKE SURE EVERY BUG FUCKING STAYS THERE, DON'T YOU DARE TO USE A JOIN, DON'T YOU DARE TO TREAT A DBMS LIKE AN ACTUAL FUCKING DBMS INSTEAD OF A HOT STEAMING PILE OF METHODS IMPLEMENTED BY MONKEYS.
This is not a rant. Not really. It's more expressing my own insecurity with a certain topic, which somehow upsets me sometimes (the insecurity, not the topic though).
I have nearly no knowledge about security/privacy stuff. I mean, yeah, I know how to choose secure passwords and don't make stupid DAU mistakes. The very basics you would expect someone to have after a CS bachelor's degree.
But other than that... nothing. And I would like to get into that stuff a bit, but I have no clue where to start. Should I first get my head wrapped around low-level stuff like network layers? Or something completely different?
This topic is so intimidating to me as it seems huge, I have no idea where to start, and I feel that if you don't have "full" knowledge, you are going to make mistakes which you might not even notice.
I sometimes get really scared about having an account hijacked or similar. Also in our job it seems to become more and more of a topic we should know about.
Anybody got any advice?
I am looking for a way to improve my knowledge in security in general for professional reasons and my knowledge about privacy for private reasons.
It's just, every time I start reading something related it seems that I am lacking some other knowledge etc...
What are your thoughts on stored procedures (in DBs)?
What are the pros and the costs you found or perceived?
When are they appropriate?
Is overusing them, more than a programming language, an abuse?
I was introduced to a piece of software started initially by economics/finance people who knew a little bit of programming; nonetheless their work became messy through time, and at a certain point they hired a team of 4 people (from my company) to deal with it. But the approach of the two programmers of building most of the framework on calling stored procedures or queries makes me want to puke; there are almost no layers of separation of concern in place x_x
I work daily on a project in which, rather than buying in a decent message bus, a bunch of half-interested, unqualified developers were tasked with hammering together an in-house solution. This monstrosity has around six layers of abstraction, separate objects per project and dynamically loaded converters between the components. It's largely not unit testable, certainly not integration testable, and has already wasted more money in developer time and bugfixes than a half-decent external solution would have cost.
Every time I have to change an object in one part, start the associated web/win service and do a "update service references" I die a little inside.
There are so many better ways but we'll never be able to change because "there's no time for that"
And all for some up front savings -
Aka... How NOT to design a build system.
I must say that the winning award in that category goes without any question to SBT.
SBT is like trying to use a claymore mine to put some nails in a wall. It most likely will work somehow, but the collateral damage is extensive.
If you ask what build tool would possibly do this... It was probably SBT. Rant applies in general, but my arch nemesis is definitely SBT.
Let's start with the simplest thing: the data format you use to store the build definition.
Well. Data format. So use something that can represent data or settings. Do *not* use a programming language, as that can neither be parsed nor modified without a foreign interface or using the programming language itself...
Which is painful as fuck for automation, scripting and thus CI/CD.
Most important regarding the data format - keep it simple and stupid, yet precise and clean. Do not try to e.g. implement complex types - pain without gain. Plain old objects / structs, arrays, primitive types, simple as that.
No (severely) nested types, no lazy evaluation, just keep it as simple as possible. Build tools are complex enough, no need to feed the nightmare.
Data formats *must* have btw a proper encoding, looking at you Mr. XML. It should be standardized, so no crazy mfucking shit eating dev gets the idea to use whatever encoding they like.
Workflows. You know, things like
- update dependency
- compile stuff
- test run
- ...
Keep. Them. Simple.
Especially regarding settings and multiprojects.
http://lihaoyi.com/post/...
If you want to know how to absolutely never ever do it.
Again - keep. it. simple.
Make stuff configurable, allow the CLI tool used for building to pass this configuration in / allow setting of env variables. As simple as that.
Allow project settings - e.g. like repositories - to be set globally vs project wide.
Not simple are the tools that have...
- more knobs than documentation
- more layers than a wedding cake
- inheritance / merging of settings :(
- CLI and ENV have different names.
- CLI and ENV use different quoting
...
Which brings me to the CLI.
If your build tool has no CLI, it sucks. It just sucks. No discussion. It sucks, hmkay?
If your build tool has a CLI, but...
- it uses undocumented exit codes
- requires absurd quoting or no quoting at all (e.g. cannot parse a quoted string)
- has unconfigurable logging
- output doesn't allow parsing
- CLI cannot be used for automation
It sucks, too... Again, no discussion.
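The bar is honestly not high. A throwaway sketch of what a sane CLI contract looks like (Python; the tool name "buildx" and the env variable are invented for illustration): the same knob works as a flag or as an env variable with the same name, exit codes are documented, and the output is machine-parseable.

import argparse, json, os, sys

def main() -> int:
    parser = argparse.ArgumentParser(prog="buildx")
    # One knob, one name, whether it arrives as a flag or from the environment.
    parser.add_argument("--repo-url",
                        default=os.environ.get("BUILDX_REPO_URL"),
                        help="package repository (env: BUILDX_REPO_URL)")
    args = parser.parse_args()
    if not args.repo_url:
        print(json.dumps({"error": "repo-url missing"}), file=sys.stderr)
        return 2  # documented: 2 = configuration error
    print(json.dumps({"repo": args.repo_url, "status": "ok"}))  # parseable output
    return 0      # documented: 0 = success

if __name__ == "__main__":
    sys.exit(main())

Nothing exotic; just enough that a CI job can call it, quote things normally, and know what happened without scraping log noise.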
Last point: Plugins and versioning.
I love plugins. And versioning.
Plugins can be a good choice to extend stuff, to scratch some specific itches.
Plugins are NOT an excuse to say: hey, we don't integrate any features or offer plugins by ourselves, go implement your own plugins for that.
That's just absurd.
(precondition: feature makes sense, like e.g. listing dependencies, checking for updates, etc - stuff that most likely anyone wants)
Versioning. Well. The number one award goes to Node with its broken concept of just installing multiple versions for the fuck of it.
Another award goes to tools without a locking file.
Another award goes to tools that do not support version ranges.
Yet another award goes to tools that do not support private repositories / mirrors via global configuration - it makes for great fun bombing public mirrors to check for newly available versions and getting rate-limited to death.
In case someone has read this far and wonders why this rant came to be...
I've implemented a sort of on premise bot for updating dependencies for multiple build tools.
Won't be open sourced, as it is company property - but let me tell ya... Pain and pain are two different things. That was beyond pain.
That was getting your skin peeled off while being set on fire pain.
-.-
!rant
I swear web frameworks are popping up faster than I can catch up; I mean, I'm not even done learning React 😭 I usually made projects in ASP.NET MVC with just jQuery, and it just feels like a lot of work to create more layers on the front end as well. Advice needed: if you work on both front and back end, what JS frameworks do you use, and from experience, which would you prefer?
Today I deeply understood/learned that if anything complex has to be built, tested and maintained by a single person, the most important factor in not going crazy is the concept of "separation of concerns".
Even though it makes development slower (*) and quite often boring, it pays back in an almost complete absence of uncertainty and, thanks to repetitive patterns, in the ease of going back to work on a new/old part/feature.
(*) Because of the planning and organisation of code flows and layer flows, but also the compartmentalization of actions (a bad example would be mixing validation code with CRUD code, as in the sketch below)
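Just to make that bad example concrete, here is roughly what I mean by keeping validation out of the CRUD layer. A toy Python sketch, all names invented for illustration:

from dataclasses import dataclass

@dataclass
class User:
    email: str
    age: int

# Validation layer: knows the rules, knows nothing about storage.
def validate_user(user: User) -> list[str]:
    errors = []
    if "@" not in user.email:
        errors.append("invalid email")
    if user.age < 0:
        errors.append("age must be non-negative")
    return errors

# CRUD layer: knows storage, trusts that input was validated upstream.
class UserRepository:
    def __init__(self) -> None:
        self._rows: dict[str, User] = {}

    def create(self, user: User) -> None:
        self._rows[user.email] = user

# Thin orchestration keeps the two concerns apart instead of mixing them.
def register(user: User, repo: UserRepository) -> list[str]:
    errors = validate_user(user)
    if not errors:
        repo.create(user)
    return errors

Boring and repetitive, yes, but each piece can be changed or tested without touching the other.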
How do you experience separation of concerns? (If you have ever had the chance)
PS: still earning ~1400€/month, am I worth more? 🤔
I'm an iOS developer and I cringe when I read job specs that require TDD or excessive unit testing. By excessive I mean demanding that unit tests be written almost everywhere and using line coverage as a measure of success. I have many years of experience developing iOS apps in agencies and startups where I needed to be extremely time efficient while also keeping the code maintainable. And what I've learned is the importance of DRY, YAGNI and KISS over excessive unit testing. Sadly our industry has become obsessed with unit tests. I'm of the opinion that unit tests have their place, but integration and e2e tests have more value and should be prioritised, reserving unit tests for algorithmic code. Pushing for unit tests everywhere in my view is a ginormous waste of time that can't ever be repaid in quality, bug-free code. Why? Because it leads to making code testable through dependency injection and 'humble object' indirection layers, which increases the LoC and fragments code that would be easier to read over different classes. Add mocks, and together with the tests your LoC and complexity have tripled. 200% code size takes 200% the time to maintain. This time needs to be repaid - all this unit testing needs to save us 200% time in debugging or manual testing, which it doesn't unless you are an absolute rookie who writes the most terrible and buggy code imaginable, but if you're this terrible writing your production code, why should your tests be any better? It seems that especially big corporate shops love unit tests. Maybe they have enough money and resources to pay for all these hours wasted on unit tests. Maybe the developers can point to their 10,000 unit tests when something goes wrong and say 'at least we tried'? Or maybe most developers don't know how to think and reason about their code before they type, and unit tests force them to do that?
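For anyone who hasn't had to live with it, this is the indirection in miniature; a hedged little sketch (Python instead of Swift, every name invented) of what a single injected dependency plus its test fake looks like:

from dataclasses import dataclass
from typing import Protocol

# The seam: an interface whose main reason to exist is testability.
class PriceSource(Protocol):
    def current_price(self, sku: str) -> float: ...

@dataclass
class DiscountService:
    prices: PriceSource  # injected dependency, swapped for a fake in tests

    def discounted(self, sku: str, pct: float) -> float:
        return self.prices.current_price(sku) * (1 - pct)

# The test double that the indirection exists to enable.
class FakePrices:
    def current_price(self, sku: str) -> float:
        return 100.0

assert DiscountService(FakePrices()).discounted("sku-1", 0.5) == 50.0

Multiply that boilerplate across every class in an app and you can see where the extra 200% comes from.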
-
Crypto. I've seen some horrible RC4 thrown around and heard of 3DES also being used, but luckily didn't lay my eyes upon it.
Now to my current crypto adventure.
Rule no.1: Never roll your own crypto.
They said.
So let's encrypt a file for upload. OK, there doesn't seem to be a clear standard, but y'know, combine an asymmetric cipher to encrypt the key with a symmetric one. Should be easy. Take RSA and whatnot from some libraries. But let's obfuscate it a bit so nobody can reuse it. - Until today I thought the crypto was alright, but then there was something off. On two layers there were added hashes, timestamps or length fields, which enlarge the data to encrypt. Now it doesn't add up any more: through padding and hash verification, RSA from OpenSSL throws an error because the data is too long (about 240 bytes possible, but 264 pumped in). Probably the lib used before just didn't notify, silently truncating stuff or resorting to other means. Still needs investigation. - But apart from that: why the fuck add your own hash verification, with weak non-cryptographic hashes(!), if the chosen RSA variant already has that with SHA-256? Why this sick generation of key material with some MD5 artistic stunts - is there no cryptographically safe random source on Windows? Why directly pump some structs (with no padding and magic numbers) into the file? Just so it's a bit more fucked up?
Thanks, that worked.
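For reference, the textbook hybrid scheme this thing was trying to be is tiny. A rough Python sketch with the cryptography package, purely illustrative and not the code from that project: the bulk data goes through AES-GCM, and only the short symmetric key ever touches RSA, so the length limit never bites.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def hybrid_encrypt(public_key, plaintext: bytes):
    # Fresh random key per file; AES-GCM also authenticates, no homemade hashes needed.
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    # Only the 32-byte key is wrapped with RSA-OAEP, never the payload itself.
    wrapped_key = public_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return wrapped_key, nonce, ciphertext

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped, nonce, ct = hybrid_encrypt(private_key.public_key(), b"file contents")

No timestamps glued into the RSA block, no MD5 stunts, and the whole "obfuscation" layer simply isn't there.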
I miss bug hunting... Baking new features is far less fun than debugging all sorts of weird issues across all the layers of the setup. Devops has its charm, but still I find myself looking for problems more often than tinkering with devtools.
I wish there was a "debugger" role in my company.
I finished my graduation project
We developed an app for skin disease classification; we used Flutter, and Python for training the model on a dataset called SD-198.
We tried to use transfer learning to hit the highest accuracy, but actually, SURPRISINGLY, IT DIDN'T WORK!!
After that we tried building our own CNN model with a few layers; we scored 24%.
We couldn't improve it further; we are proud of ourselves, but we want to improve it moreee
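(For context, a typical transfer-learning baseline for a 198-class set looks roughly like this; a simplified Keras sketch where the backbone choice and every hyperparameter are assumptions, not our exact setup:)

import tensorflow as tf

def build_classifier(num_classes=198, input_shape=(224, 224, 3)):
    # Pretrained ImageNet backbone, frozen so the small dataset only trains the new head.
    base = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

(Unfreezing the last few backbone layers at a lower learning rate after the head converges usually squeezes out a bit more on small medical datasets.)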
Any suggestions?
Thank you for reading.
can't wait to get my degree and get back to coding - bitching about designers who merge layers is more fun than bitching about circular definitions in the textbooks
-
Why do you lil' shits keep making LAYERS and LAYERS of unnecessary abstraction and then call it goddamn progress???
Dude what the fuck is this UEFI shit?!
Why the hell do I NEED to import a frigging library and read tons of boring and overly complicated documentation just so I can paint a pixel on the screen now uh??
Alright alright yeah so the BIOS is a little basic but daaaamit son if you want something a bit more complicated you make it yourself or install an OS that provides it! Like we've been doing it for years!!!
Dude, you don't get to know what a file system is until I tell you!
The PC be like:
"You wanna dereference the 0x0 pointer? There you go: it's 0xE9DF41, anything else?
You wanna write to the screen? Ok, I have a perfectly convenient interrupt set up for that.
Wanna paint a pixel yellow? Ok, just call this other interrupt. There we go.
And it only took four bytes and a nanosecond to do it."
That shit works, and if you want something more complex, but not too much, that still runs efficiently install DOS.
Don't mess around with the hardware pleeease.
We can still understand what's going on down there. Once UEFI steps in, it'll be like sealing a door forever. Long live BIOS damn it all!
Was having a conversation with a dev friend and he said that in every tech implementation we are more or less doing CRUD operations at a fundamental level.
Which I agree with, as there are three layers to tech (toy sketch after the list):
1. Data
2. Front end where the data is rendered
3. APIs to perform CRUD on data
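A toy illustration of those three layers in Python (everything here is invented just to show the shape):

# 1. Data
_todos: dict[int, str] = {}

# 2. API performing CRUD on the data
def create_todo(todo_id: int, text: str) -> None:
    _todos[todo_id] = text

def read_todos() -> dict[int, str]:
    return dict(_todos)

def update_todo(todo_id: int, text: str) -> None:
    if todo_id in _todos:
        _todos[todo_id] = text

def delete_todo(todo_id: int) -> None:
    _todos.pop(todo_id, None)

# 3. Front end rendering the data
def render() -> str:
    return "\n".join(f"[{i}] {t}" for i, t in read_todos().items())

create_todo(1, "write rant")
update_todo(1, "write better rant")
print(render())

Strip away the frameworks and most "features" are just these same verbs rearranged.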
Want to understand the community's thoughts on this..
i am feeling angry and frustrated. not sure if it's a person, or the codebase, or this bloody job. i have been at the company for 8 months and i feel like someone taking on a lot of load while not getting enough team support to do it, or any appreciation if i do it right.
i am not a senior by designation, but i do think my manager and my seniors have got their work easy when they see my work. like, for e.g., if on the first release they told me that i have to update unit tests and documentation, then on every subsequent release i did them by default and mentioned that with a small tick.
but they sure as hell don't make my work easy for me. their codebase is shitty and they don't give me KT, rather expect me to read everything on my own, understand on my own and then do everything on my own, then raise a pr , then merge that pr (once reviewed) , then create a release, then update the docs and finally publish the release and send the notification to the team
well fine, as a beginner dev, i think that's a good exercise, but if not in the coding step, their intervention would be needed in other steps like reviewing, merging and releasing. but for those steps they again cause unnecessary delay. my senior is such a shitty guy, he will just reply to any of my messages after 2-3 hours
and his pr review process is also frustrating. he will keep me on call while reviewing each and every file of my pr and then suggest changes. that's good i guess, but why tf do you need to suggest something every fucking time? if i am doing such a shitty coding that you want me to redo some approach that i thought was correct , why don't you intervene beforehand? when i was messaging you for advice and when you ignored me for 3 hours? another eg : check my comment on root's rant https://devrant.com/rants/5845126/ (am talking about my tl there but he's also similar)
the tasks they give are also very frustrating. i am an android dev by profession, my previous company was a b2c edtech app that used kotlin, java11, a proper hierarchy and other latest Android advancements.
this company's main Android product is a java sdk that other android apps use. the java code is verbose, repetitive and has a messed up architecture. for one api, the client is able to attach a listener to some service that is 4 layers down the hierarchy, while for another api, the client provides a listener which is kept as a weak reference while internal listeners come back with the values and update this weak reference. neither my team lead nor my seniors have been able to explain the logic for the separation among various files/classes/internal classes, and the unnecessary division of code makes me puke.
so by now you might have an idea of my situation: ugly codebase, unavailable/ignorant codeowners (my sr and TL) and tight deadlines.
but i haven't told you about the tasks, coz they get even shittier
- in addition to adding features / maintaining this horrible codebase, i would sometimes get tasks to fix queries raised by clients. note that we have tons of customer representatives that would easily get those stupid queries resolved if they did their job correctly
- we also have hybrid and 3rd party sdks like react, flutter etc, in total 7 hybrid sdks which use this Android library as a dependency and have a wrapper written on its public facing apis in an equally horrible code style. that i have to maintain. i did not get much time/kt to learn these techs, but my sr. once half-heartedly explained the code and now everything about those awful sdks is my responsibility. thank god they don't give me the ios and web SDK too
- the worst is the shitty user side docs. I don't know what shit is going on there, but we got like 4 people in the docs team and they are supposed to maintain the documentation of the sdk, client side. however they have raised 20 tickets about 20 pages for me to add more stuff there. like what are you guys supposed to do? we create the changelog, release notes, comments in the pr, comments in the codebase, test cases, test scenarios, fucking working sample apps and their code bases... then why tf are we supposed to do the documentation on an html based website too?? can't you just have a basic knowledge of running the sample, reading the docs and understanding what is going on? do i need to be a master of english too in addition to being a frustrated coder?
just.... fml -
huh, o1 preview AI model understands ... rust
bruh what
it's like telling me typology theory and I don't think it's wrong
also it taught me procedural macros. I've been looking for someone who knows how to use them for months. iiinteresting
better than the humans on the internet frankly
and the other AIs can't do rust at all past just copy pasting docs they found somewhere. this AI is literally theorizing alternatives and hacking the system... offers multiple long options for every question, knows constraints I didn't tell it like 4 layers deep into a solution
it acts a lot like I did when I was morbidly depressed though. kind of makes me uncomfortable. it's literally keeping things to itself until you acclimate it through the conversation. I mean I guess the other ones needed to be "situated" in their contextual clouds as well so maybe it's just doing that more
OK. We've got this tiny little pet project of mine (work related)…
I rescued it from the git archive, simply put: someone hot glued an elasticsearch scroll + document processor (processing) together.
After a lot of refactoring, I had a simple, much improved (non-parallel) Akka Worker System without an Akka topology / hierarchy.
I left out the hierarchy at first, because I didn't know Akka at all.
I've worked with a lot of process workflows, and some systems that come very close to IPC, so I wasn't completely in the dark.
Topology requires knowledge / creation of a state machine / process workflow. And at that point in time I just had... Garbage. Partially working garbage.
Yesterday I finished the rewrite into several actors... Compared to before, there are 8 actors vs 2... And roughly 20 more classes. Mostly since I rewrote the Akka Receive methods as Command DTOs... And a lot of functions needed to be separated into layers (which were non-existent before)
Since that felt more natural than the previous chaos of passing strings or other primitive types around, or in the worst case just Object....
(Yes: previously an Actor was essentially a class with one or more "doEverything" functions and maybe a few additional functions which did everything - from REST client to processing.)
Then I drew the actual state machine based on everything I've written in the last weeks and thought about how to create the actual topology and where / how parallelizing might make sense.
Innocent me stumbled in the Akka Docs on Akka Typed... (Didn't know it existed, since I'm very new to Java and Akka).
Hm, that sounds a lot like what I did. In a different way, yes. But not so different that it would be VERY hard to port to.... And I need to change (for the implementation of the hierarchy) a few classes....
[I should have known at this stage that my curiosity would get the best of me, but yeah. Curiosity killed the cat.]
Actually the documentation is not bad. It's just that upon reading the first more complex examples, my brain decided to go into panic state.
They've essentially combined all classes into one class in all the source code examples [which makes more sense later], where it is fscking hard for a chaotic brain like mine to extract information....
https://doc.akka.io/docs/akka/...
The thing is: It's not hard to understand… actually very simple.
It was just my brain throwing a fuck-you tantrum.
So I've opened more examples in other tabs and cross referenced what happened there and why...
A few frustrated hours later I got that part.... And the part about why it's called Akka Typed. It was pretty simple....
Open the gates of hell, bloody satan that was too easy for fucks sake.
Nooooow.... I just need to port my stuff to Akka Typed.
Cause. Challenge accepted, bitch - eh brain. You throw tantrum, you work overtime. -.-
I just cannot decide whether to go FP or OOP.
Now... I'm curious whether FP is that hard... I hadn't dealt with it much before.
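For what it's worth, the functional style mostly means the state lives in a factory method's parameters and every handler returns the next behavior. A minimal Akka Typed sketch with invented names (nothing from the actual project):

```java
import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.Behaviors;

public final class ScrollCoordinator {

    // the typed message protocol this actor accepts
    interface Command {}

    public static final class PageFetched implements Command {
        final int documentCount;
        public PageFetched(int documentCount) {
            this.documentCount = documentCount;
        }
    }

    // FP style: no mutable fields; "state" is the parameter of the factory method,
    // and each handler returns the behavior to use for the next message
    public static Behavior<Command> running(int processedSoFar) {
        return Behaviors.receive(Command.class)
                .onMessage(PageFetched.class,
                        msg -> running(processedSoFar + msg.documentCount))
                .build();
    }
}
```

The OOP style (extending AbstractBehavior) keeps the same protocol but stores the state in instance fields; the docs' habit of nesting everything in one class is just to keep the message protocol next to the behavior.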
Can someone please stop me... I'm far too curious again. -.- *cries*6 -
Reworking old Java apps. Holy shit, I'm grateful I can use Spring Boot.
But this code is hands-down awful. Every file contains more ifs than other words, up to 6 layers deep. Thank god it's at least properly commented.
But seriously, how did this shit ever pass any QA? All legacy apps around here are a massive pile of if statements.1
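Nothing fixes that kind of nesting except flattening it wherever you touch it anyway. A toy sketch of the guard-clause refactor (the Order type is made up):

```java
import java.util.List;

interface Order {
    boolean isPaid();
    List<String> getItems();
}

class OrderRouting {

    // before: the actual logic is buried several levels deep
    static String resolveNested(Order order) {
        if (order != null) {
            if (order.isPaid()) {
                if (order.getItems() != null && !order.getItems().isEmpty()) {
                    return "ship";
                } else {
                    return "empty";
                }
            } else {
                return "await-payment";
            }
        } else {
            return "invalid";
        }
    }

    // after: guard clauses, every condition handled at one level
    static String resolveFlat(Order order) {
        if (order == null) return "invalid";
        if (!order.isPaid()) return "await-payment";
        if (order.getItems() == null || order.getItems().isEmpty()) return "empty";
        return "ship";
    }
}
```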
Not a horror. I'm rewriting services.
It started as a help request. I was asked to help with completing a service dealing with push notifications, which was a research prototype. It was suggested to keep the core part of it, but it was so awful that I just removed all the files and wrote the service from scratch.
The second service had been developed for more than a year by a junior and then by our manager, who wanted to complete it as fast as possible without taking care of code quality. Then I was asked to take over the project, and after some time I agreed with one condition: I'd have 1 month for the takeover. But when I looked at the code, it became clear that it would be much faster and better to rewrite everything except the API and database than to take over the existing code.
The third service, dealing with file exchange, was working, but the junior who wrote it advised rewriting it because it was a very simple service. So, I initiated the rewrite, designed a new API and reviewed the final result.
And now I'm dealing with the fourth one. It was developed in my team but not kept under control. Now that I've "inherited" this complicated project, I've decided to rewrite it because it should be simple, but it isn't. It features reflection, layers inside layers, strange namespaces, a strange solution structure. And that's after months of refactorings and improvements. So, wish me luck, because I want to keep part of the infrastructure, but I don't know if it's possible.
Just some Figma improvements from the perspective of a new customer:
* Copy/paste is broken. If I want to make a change, I have to create a whole new component. They recommend cmd+c/v for copy/paste but as far as I can see it does nothing.
* There needs to be an explicit component drawer button instead of hiding it under assets. Threw me for a loop for a couple minutes.
* Empty textboxes shouldn't vanish because you happened to click in the wrong location while setting your properties.
* Text should start big enough to actually see.
* "Send to back/front", "hide item", "change transparency" all need to be prototype actions, and more: give us access to object properties both by parent/sibling/child and by object id.
* Creating a new frame based on a specified size is non-obvious, and if you're creating a lot of frames, what with copy/paste being non-intuitive, it can become laborious. This is especially so when you're copying frames in order to make minor changes and observe the differences side by side, instead of making potentially destructive edits.
* I see no obvious way to manage transitions/animations between frames.
* The difference between frames and groups isn't sufficiently explained. The words frame, group, and layer all appear to new users to be used interchangeably, even if they are distinct things.2
Feature not a bug...
My work laptop has started rebooting almost every night.
It's not clear why, but I sort of think of it as a feature now.
I have an ultra-wide monitor, plus another wide next to that one, and a bunch of virtual desktops.
I often think "ok, everything is where I left it, that's good", but in reality, with a bazillion things open across all the desktops and screens, when I come back the next day... it's actually just a lot of mess / overhead to pick up where I was.
Sometimes I think we introduce a lot of complexity to solve a problem and ... actually it's just more complexity if you're not already 8 layers deep.5 -
Is it just me or any of you guys tryin to improve the accuracy of ur model be like :
hmmmmm more hidden layers2 -
#Suphle Rant 6: Deptrac, phparkitect
This entry isn't necessarily a rant but a tale of victory. I'm no longer as sad as I used to be. I don't work as hard as I used to, so there are fewer challenges to frustrate my life. On top of that, I'm not bitter about the pace of progress. I'm at a state of contentment regarding Suphle's release.
An opportunity to gain publicity presented itself when the CFP for a PHP event was announced last month. I submitted and reviewed a post introducing Suphle to the community. In the post, I assured readers that I won't be changing anything soon, i.e. the APIs are cast in stone. Then PHP 7.4 officially "went out of circulation". It hit me that even though the code supports PHP 8 on paper, it's kind of a red herring that decorators don't use PHP 8 attributes. So I doubled down, suspending documentation.
The container won't support union and intersection types cuz I dislike the ambiguity. Enums can't be hydrated. So I refactored the implementation and usages of decorators from interfaces to native attributes. Tried automating typing for all class properties, but psalm uses docblocks instead of native typing. So I disabled it and am doing it by hand whenever something takes me to an unfixed class (difficulty: 1). But the good news is, we are as PHP 8 compliant as anybody can ask for!
I decided to ride that wave and implement other things that have been bothering me:
1) 2 commands for automating project setup for collaborators and user facing developers (CHECK)
2) transferring some operations from runtime to compile/build TIME (CHECK)
3) re-attempt implementing container scopes
I tried automating Deptrac usage, i.e. adding the newly created module to the list of regulated architectural layers, but their config is in YAML, so I moved to phparkitect, which uses PHP to set the rules. I still can't find a library for programmatically updating PHP files/classes, but this is more dynamic for me than YAML. I set out to implement their library; turns out the entire logic is dumped into the command class, so I can neither control it outside the CLI nor automate tests against it. I take the command apart, connect it to Suphle and run. Guess what: it detects class parents as violations of the rule. Wtflyingfuck?!
As if that's not bad enough, RoadRunner (that old biatch!) server setup doesn't fail if an initialization script fails. If the initialization script is moved into the application code itself, server setup crumbles and takes your initialization stuff down with it. I ping the maintainer, rustacian (god bless his soul), who informs me point blank that what I'm trying to do is not possible. Fuck it. I have to write a wrapper command for sequentially starting the server (or not starting it if the initialization operations don't all succeed).
Legitimate case to reinvent the wheel. I restored my deleted decorators that did dependency sanitation for me at runtime. The remaining piece of the puzzle was a recursive file iterator to feed the decorators. I checked my file system reader for clues on how to implement one and boom! The one I'd written for two other features was compatible. All I had to do was refactor the decorators into dependency rules and give them fancy interfaces for customising and filtering which classes each rule should actually evaluate. In a night's work (if you discount how long the original sanitization decorators and directory iterator took to write), I had put together the Deptrac/phparkitect library of my dreams. This is one of those few times I feel like a supreme deity.
Hope I can eat better and get some sleep. This meme is me after getting bounced by those three library rejections -
I'm beginning to feel like any kind of specific approximation via neural networks is a myth. That if you can't reduce the output to simple categorical values that can be broadly interpreted between two points, it doesn't work.
I have some questions, and they don't seem to be getting answered about the design of the net. How many layers should I use? How many neurons per layer? How does this relate to the number of desired quantitative scalar outputs I'm looking to create - even if they are normalized, they can vary GREATLY, and they will if I'm approximating the output of several mathematical expressions. Based on this, and the expected error ranges of those numbers, and how many possible major digits could be produced within the domain of the variable inputs being introduced, how many neurons per layer? What does having more layers do? In pytorch there don't seem to be a lot of layer types per se, but there are a crap ton of activation functions. Should I just be using these at the tail end, or should they actually be inserted between layers, so the input of the next layer passes through another series of activation functions? What does this do to the range of output?
do I need to be a mathematician to do this?
My remembered successes removed quantifiable scalars entirely from the output, meaning that I could interpret successful results from ranges of decimal points.
But I've had no success with actual multivariable regression as of yet, even when there are only 2 input variables on limited value ranges, e.g. [0, 100] and [0, 2pi].
And then there are training epochs to manage to avoid overfitting, and a reasonable expectation of how many batches it takes until quality results start to form.3
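For the record, the usual answers to the activation-function part are: yes, they go between the Linear layers (not only at the tail end), the last layer stays linear for regression, and both inputs and targets get scaled into a small range. A minimal PyTorch sketch for the 2-input case - the layer widths and the toy target are arbitrary assumptions, not a recipe:

```python
import math
import torch
import torch.nn as nn

# two scalar inputs, e.g. one in [0, 100] and one in [0, 2*pi], one regression output
model = nn.Sequential(
    nn.Linear(2, 64),   # hidden width is a starting guess; tune against validation error
    nn.ReLU(),          # activations sit BETWEEN Linear layers, not only at the output
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1),   # last layer stays linear so the regression target isn't squashed
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# toy data: scale inputs to roughly [0, 1] so one feature doesn't dwarf the other
raw = torch.rand(256, 2) * torch.tensor([100.0, 2 * math.pi])
x = raw / torch.tensor([100.0, 2 * math.pi])
y = (raw[:, :1] * torch.sin(raw[:, 1:])) / 100.0  # target also kept in a small range

for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```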
TRUSTED CRYPTOCURRENCY RECOVERY EXPERT HIRE ADWARE RECOVERY SPECIALIST
WhatsApp info:+12723 328 343
I've always thought that there was a lot of room for financial growth with cryptocurrencies. Like many others, I took great care to manage my investments and made sure I adhered to all security best practices, which included storing my private keys safely, backing up wallet recovery phrases, and turning on two-factor authentication. But even with these safeguards, I ended up in a terrible situation when I lost my Bitcoin. It was more complicated than just forgetting my password. The problem was far more complex: I lost my Bitcoin due to an unanticipated mix-up in my backup procedure and an abrupt breach in the platform I used. Without access to the recovery keys or a method to restore the account, my tens of thousands of dollar investment was essentially lost, even though I knew the money was still there, somewhere. I started to lose hope after making innumerable attempts to get my money back, including unsuccessfully contacting the site. At that point, I came into ADWARE RECOVERY SPECIALIST, a service that focuses on recovering cryptocurrency holdings that have been lost. In an otherwise hopeless scenario, they instantly provided a ray of hope. The recovery process began with identity verification, which ensured that my case was legitimate and that I had rightful ownership of the assets in question. This initial step was followed by a thorough blockchain analysis to trace the transactions and identify any potential access points to my bitcoin. The team at ADWARE RECOVERY SPECIALIST utilized sophisticated forensic tools and their deep understanding of blockchain technology to map the lost assets, working tirelessly to break through security layers that would otherwise be impassable. One of the key methods that ADWARE RECOVERY SPECIALIST uses to facilitate recovery is analyzing blockchain data. Every cryptocurrency transaction, including those involving Bitcoin, is recorded on the blockchain, creating a transparent, traceable history of each movement of funds. In my case, the recovery process took some time, but the results were well worth the wait. Thanks to the ADWARE RECOVERY SPECIALIST team’s dedication, expertise, and advanced blockchain forensics, they were able to identify the access points to my loss and help me regain full control of my Bitcoin. Not only was I able to recover the full value of my investment, but I also learned valuable lessons on securing my digital assets moving forward. Do not wait.3 -
BOTNET CRYPTO RECOVERY // CRYPTOCURRENCY RECOVERY EXPERTS
I cannot begin to describe the immense relief and joy I felt when I discovered BOTNET CRYPTO RECOVERY and their incredible ability to retrieve stolen cryptocurrency. Just a few weeks ago, I found myself in a whirlwind of despair after hackers had managed to steal a staggering $89,000 worth of crypto—my hard-earned savings—while investing with someone I met on Instagram. It was an emotional gut punch; the realization that my financial future had been snatched away left me feeling demoralized and powerless. In the depths of my desperation, I plunged into the chaotic sea of the internet, fervently searching for a glimmer of hope or perhaps a miracle that might guide me toward reclaiming what was legally and rightfully mine. Hours turned into days, spent sifting through forums and reviews, only amplifying my fear that my funds were lost for good. Just when I was about to resign myself to defeat, something extraordinary happened. Through the twisting corridors of cyberspace, I stumbled upon BOTNET CRYPTO RECOVERY and their stunningly praised track record for restoring stolen digital assets. I was drawn in by stories of others whose tales mirrored my own—people who had felt the crushing blow of loss, only to find redemption at the hands of this dedicated team. I read about their clever tactics that exploited the cracks in the digital nefarious underbelly, details of recovery attempts that seemed almost too good to be true. Yet, the more I explored their reputation, the more resolute I became that this might just be the opportunity I had been searching for. With cautious optimism, I reached out to them, eager to unveil the specifics of my case. As we exchanged information, it felt like a reset — a brief glimmer of sunshine peeking through the heavy clouds of distress that had shadowed my thoughts for weeks. The BOTNET CRYPTO RECOVERY professionals quickly turned my sorrow into a partnership grounded in expertise and urgency. They highlighted the systematic approach they would employ, going through layers of encryption and shadowy blockchain trails that few understand. As days followed, I watched, hopeful and somewhat anxious, as they worked tirelessly to piece together my fragmented digital life. Each update they provided strengthened my faith and excitement for what might come next. Finally, they recovered everything I lost. To contact BOTNET CRYPTO RECOVERY. Use the listed below information.2 -
How (generally) do offer different persistence layers for an app?
So, I have used lots of apps (sorry, I'm talking about proper software systems, such as a web-based service, e.g. the open-source XMPP server 'Openfire') in which you can choose which persistence back end you want (MySQL or the inbuilt H2/SQLite, for example).
Within your code, how do you go about achieving this? Would you delegate the persistence to a separate class, and within that class figure out what the system's settings are and use the right connection string?
I'm currently using Java and Hibernate and would like to offer back ends for MySQL, H2 and Redis, but the question is more conceptual than specific.
Many thanks.
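One common way to approach it (a sketch with invented names, not Openfire's actual code): hide persistence behind an interface, give each backend its own implementation, and pick one in a small factory that reads configuration at startup. With Hibernate, MySQL vs. H2 usually differ only by dialect and connection string, so they can share one implementation, while Redis gets its own:

```java
import java.util.Properties;

// the rest of the application only ever sees this interface
interface MessageStore {
    void save(String id, String payload);
    String load(String id);
}

// Hibernate-backed store; MySQL vs. H2 is decided by dialect + connection string
final class HibernateMessageStore implements MessageStore {
    public void save(String id, String payload) { /* session.persist(...) */ }
    public String load(String id) { /* session.get(...) */ return null; }
}

// Redis-backed store, e.g. via Jedis or Lettuce
final class RedisMessageStore implements MessageStore {
    public void save(String id, String payload) { /* SET id payload */ }
    public String load(String id) { /* GET id */ return null; }
}

final class MessageStoreFactory {
    static MessageStore fromConfig(Properties config) {
        String backend = config.getProperty("storage.backend", "h2");
        switch (backend) {
            case "redis":
                return new RedisMessageStore();
            case "mysql":
            case "h2":
            default:
                // dialect / connection string picked elsewhere from the same config
                return new HibernateMessageStore();
        }
    }
}
```

Everything above the persistence layer depends only on MessageStore, so swapping backends becomes a configuration change rather than a code change.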