Search - "not enough cores"
-
The stupid stories of how I was able to break my school's network just to get better internet, along with some more ridiculous fun. XD
1st year:
It was my freshman year in college. The internet sucked really, really, really badly - too many people were clearly using it - and I had to find a way to remedy this. Upon some further research through Google, I found out that one can in fact turn a computer into a router. Now, what's interesting about this network is that it only accepts computers, because joining requires downloading the software that the network provides for you: some weird program that actually looks through your computer and makes sure it's OK to be added to the network. Routers can't download and install that software, thus no internet… but a PC that can be turned into a router itself is a different story. I found that I could download the software, let it check the PC, and then turn on the router feature. Voilà: a personal, fast connection plugged directly into the wall. No more sharing a single shitty router!
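For flavor, here's roughly what the same trick looks like on a Linux box - a minimal sketch only, since the setup back then used the built-in Windows router feature; the interface names and tooling here are my assumptions, not what was actually used:

```python
# Hypothetical Linux version of the "PC as router" trick. Assumptions:
# eth0 is the wall jack (already approved by the network's client check),
# eth1 serves the devices hiding behind this machine. Run as root.
import subprocess

UPLINK = "eth0"   # assumed name of the approved, wall-facing NIC
SHARED = "eth1"   # assumed name of the NIC the other devices plug into

def sh(cmd: str) -> None:
    """Run a shell command, raising if it fails."""
    subprocess.run(cmd, shell=True, check=True)

# Let the kernel forward packets between the two interfaces.
sh("sysctl -w net.ipv4.ip_forward=1")

# NAT everything leaving via the uplink, so all traffic appears to come
# from the one machine the check software blessed.
sh(f"iptables -t nat -A POSTROUTING -o {UPLINK} -j MASQUERADE")
sh(f"iptables -A FORWARD -i {SHARED} -o {UPLINK} -j ACCEPT")
sh(f"iptables -A FORWARD -i {UPLINK} -o {SHARED} "
   "-m state --state RELATED,ESTABLISHED -j ACCEPT")
```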
2nd year:
This was about the year when bitcoin mining was becoming a thing, and everyone was in on it. My shitty computer couldn't possibly pull off mining for bitcoins. I needed something faster. How I found out that I could use my school's servers was merely an accident.
I had been installing the mining software on every possible PC I owned, but alas, all my PCs were just not fast enough. I decided to try it on the RDS server. It worked; the command window was pumping out coins! What I came to find out was that the RDS server had 36 cores. This thing was a beast, and it made sense that it could actually pull off mining. A couple of nights later I signed in remotely to the RDS server, created a macro that would continuously move my mouse around the Remote Desktop screen to keep my session alive at all times, and started my mining operation. The following morning I woke up and my session was gone. How sad, I thought. I quickly tried to remote back in to see what I had collected: "Error, could not connect". Weird… that usually never happens; maybe I did the remoting wrong. I went to my school's website to do some research on the problem. It was down. In fact, everything was down… I came to find out that I had accidentally taken down the school's network with my mining operation. I wasn't found out, but I haven't done any mining since.
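The post never says what the macro was built with, so purely as an illustration, a keep-alive jiggler can be as small as this (pyautogui is an assumed choice of library, not the original tool):

```python
# Minimal keep-alive macro sketch: nudge the pointer so the RDP session
# never idles out. pyautogui here is an assumption.
import time
import pyautogui

while True:
    pyautogui.moveRel(5, 0, duration=0.2)    # wiggle right...
    pyautogui.moveRel(-5, 0, duration=0.2)   # ...and back
    time.sleep(60)  # once a minute comfortably beats idle timeouts
```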
3rd year:
As an engineering student I found out that all engineering students get access to the school's VPN. Cool - it's technically there to get around some wonky issues with remoting into the RDS servers. What I came to find out, after messing around with it frequently, is that I could actually turn the VPN against the network's screwed-up security. Remember how I told you that a program has to be downloaded before one can be accepted onto the network? Well, I was able to bypass all of that simply by using the school's VPN against itself… How dense does one have to be to not have patched that one?
4th year:
It was another programming day, and I needed access to my phone's storage. Using some specially made apps, I could easily connect to the phone from my computer and continue my work. But what I found was that I could travel around the network: I could access my phone through it from anywhere, because the network spans the entirety of the school. If I left my phone down in the engineering building and then went north to the biology building, I could still reach it. That seems like a pretty fatal flaw. My idea is to hook a webcam up to a robot, control it remotely from the RDS servers, and have the little robot go to my classes for me.
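The post doesn't name the apps involved, but as a hedged sketch, ADB over TCP/IP behaves exactly like this (the address below is made up):

```python
# Reaching a phone across a flat campus network, assuming it exposes
# ADB over TCP/IP. The address is hypothetical.
import subprocess

PHONE = "10.14.2.77:5555"  # hypothetical phone IP in the engineering building

subprocess.run(["adb", "connect", PHONE], check=True)  # attach from anywhere
subprocess.run(["adb", "-s", PHONE, "shell", "ls", "/sdcard"],
               check=True)  # browse its storage as if plugged in locally
```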
What crazy shit have you done at your university?
-
Current lappy got about 4GB RAM and not enough cores. I can't even run Krita without it slowing down the more I work on a file.
It would be frustrating if only I wasn't so depressed.
So yeah, due to being broke and a lack of nerves, I'm gonna completely stop working on the comic for now.
-
Can someone help me understand?
I subscribed to a nifty IT-related magazine, and on its back there's an ad for "Dedicated root server hosting". Nothing unusual at first glance, but after I read the issue I decided to humor them and see what it is they're offering, and... It just... Doesn't make sense to me!
An ad for "Dedicated Root Server" - What is a dedicated root server first of all? Root servers of any infrastructure sound pretty important.
But the ad also boasts "High speed performance with the new Intel Core i9-9900K octa-core processor", and that's the first weird thing.
Why would anyone responsible want to put an i9 into a highly reliable root server when the thing doesn't even support ECC? Also, come on, octa-core isn't much; I deal with servers that have anywhere between 2 and 24 cores. 8 isn't exactly a win, even if it comes with a higher per-core clock.
Oh, also, further down the ad has a list of seeming advantages/specs of the servers; they proclaim that the CPU "incl. Hyper-Threading-Technology"... Isn't that... standard when it comes to servers? I have never seen a server without hyperthreading at my job.
"64 GBs of DDR4 RAM" - Fair enough, 64 gigs is a good amount, but... Again, its not ECC, something I would never put into a server.
"2 x 8 TB SATA Enterprise Hard Drive 7200 rpm" - Heh, "enterprise hard drive", another cheap marketing word, would impress me more if they mentioned an actual brand/model, but I'll bite, and say that at least the 7200 rpm is better than I expected.
"100 GBs of Backup Space" - That's... Really, really little. I've dealt with clients who's single database backup is larger than that. Especially with 2x8 TB HDD (Even accounting for software raids on top)
This one cracks me up - "Traffic unlimited"
Whaaaat?! You're not gonna limit the total traffic my server transfers to the internet from your data center? Oh, how generous of you - except the alternative would make the server just an expensive paperweight! I thought this ad was aimed at semi-professionals at least, so why mention traffic and not bandwidth, the thing that matters far more for a server? How much bandwidth do I get? Don't tell me you use dial-up for your "Dedicated Root Servers"!
"Location Germany or Finland" - Fair enough, geolocation can matter when it comes to latency.
"No minimum contract" - Oooh, how kiiiind of you, again, you are not gonna charge me extra for using the server only as long as I pay? How nice!
"Setup Fee £60" - I guess, fair enough, the server is not gonna set itself up, only...
The whole ad is for "monthly from £55.50", which makes that quite a large fee for setup.
Oh, and a cherry on top, the tiny print on the bottom mentions: "All prices exclude VAT and are a subject to..." blah blah blah.
Really? I thought this sort of near-deceit of customers was reserved for the consumer sphere!
I must say, there's being unimpressed, and then... there's this. Why, just... why? Does anyone understand this? Because I don't...
-
Is your code green?
I've been thinking a lot about this for the past year. There was recently an article on this on slashdot.
I like optimising things to a reasonable degree and avoiding bloat. What are some signs of code that isn't green?
* Use of technology that claims to be fast without real expert review and measurement. Lots of tech out there claims to be fast but actually isn't, or gets its speed by saturating resources while being inefficient.
* It uses caching. Many might find that counterintuitive. In technology it is surprisingly common to see people scale or cache rather than directly fix the thing that's watt-expensive, which is compounded when the cache has weak coverage.
* It uses scaling. Originally scaling was a last resort. The reason is simple: it introduces excessive complexity. Today it's common to see people scale things rather than make them efficient. You end up needing ten instances when a bit of skill could bring you down to one, which could scale as well but likely won't need to.
* It uses a non-trivial framework. Frameworks are rarely fast. Most will fall in the range of ten to a thousand times slower in terms of CPU usage. Memory bloat may also force the need for more instances. Frameworks written in already-slow high-level languages may be especially bad.
* Lacks optimisations for obvious bottlenecks.
* It runs slowly.
* It lacks even basic resource usage measurement.
Unfortunately smells are not enough on their own, but they are a start. Real measurement and expert review are the only way to get an idea of whether your code is reasonably green.
I find it not uncommon to see things using tens, hundreds, or even thousands of times more resources than needed.
In terms of cycles that can be the difference between needing a single core and a thousand cores.
This is common in the industry but it's not because people didn't write everything in assembly. It's usually leaning toward the extreme opposite.
Optimisations are often easy and don't require writing code in binary. In fact the resulting code is often simpler. Excess complexity and inefficient code tend to go hand in hand. Sometimes a code cleaning service is all you need to enhance your green.
I once rewrote a data-parsing library that had to parse a hundred MB and was a performance hotspot, porting it into C from an interpreted language. I measured it and the results were good: the interpreted version had been optimised as much as possible, yet the C version was still at least 50 times faster.
I recently stumbled upon someone's attempt to do the same and I was able to optimise the interpreted version in five minutes to be twice as fast as the C++ version.
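To be concrete about what "five minutes" means - this is not the actual library from either story, just an illustration of the species of fix: hoisting work out of a hot loop in an interpreted language.

```python
# Illustrative only. Before, the regex was compiled and the key list
# rebuilt on every line; here both are hoisted out of the loop, and
# membership testing uses a set instead of a list.
import re

KNOWN_KEYS = {"id", "name", "value"}        # set membership is O(1)
FIELD_RE = re.compile(r"(\w+)=([^;]*)")     # compiled once, not per line

def parse(lines):
    return [
        {k: v for k, v in FIELD_RE.findall(line) if k in KNOWN_KEYS}
        for line in lines
    ]

print(parse(["id=1;name=a;junk=x", "id=2;value=9"]))
```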
I see opportunity to optimise everywhere in software. A billion kg of CO2 could be saved easily if a few green-code shops popped up. It's also often a net win: faster software, lower costs, lower management burden... I'm thinking of starting a consultancy.
The problem is, after witnessing the likes of Greta Thunberg, if that's what the next generation has in store, then as far as I'm concerned the world can fucking burn and her generation along with it.
-
In the last episode of "How SystemD screwed me over", we talked about SystemD's PrivateTmp and how it stopped me from generating SSL certificates.
In today's episode - SystemD vs CGroups!
Mister Pottering and his team apparently felt that CGroups are underused (as they can be quite difficult to set up), and so decided to integrate them into SystemD by default, as well as to provide a friendlier interface for controlling their values.
One can read about these interactions in the manual page "systemd.resource-control"
All is cool so far. So what happened to me today?
Imagine you did a major system release upgrade of a production server, previously tested on a standalone server. This upgrade doesn't just bump the distribution, however; it also includes the switch from SysVInit to SystemD. Still, everything went smoothly before - nothing to worry about now, right? Wrong.
The test server was never properly stress-tested. This would prove to be an issue.
When the upgrade finishes, it is 4 AM. I am happy to go to bed at last. At 6 AM, however, I am woken up again: the server's web services are unavailable and the machine is under 100% CPU load. Weird. I check htop and see that Apache is now eating up all 32 virtual cores. So I restart it, chalking it up to some weird bug or other, and the load returns to normal.
2 hours later, however, the same situation occurs. This time, I scour all the logs I can, and find something odd - many mentions that Apache couldn't create a worker thread. Now that's weird.
Several hours of research and tinkering later, I found out the following:
1 - By default, all processes of a system that runs SystemD are part of several CGroups. One of these CGroups is the PID CGroup, meant to stop a runaway process from exhausting all PIDs/TIDs of a system.
This limit is, by default, set to a certain amount of the total available PIDs. If a process exhausts this limit, it can no longer perform operations like fork().
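You can watch this limit directly; a quick sketch, assuming the cgroup v1 layout that distro release would have used (on cgroup v2 the same files sit under /sys/fs/cgroup/system.slice/ instead):

```python
# Read the PID cgroup's cap and current task count for a service.
# Path assumes cgroup v1 with the pids controller mounted.
from pathlib import Path

cg = Path("/sys/fs/cgroup/pids/system.slice/apache2.service")

limit = (cg / "pids.max").read_text().strip()        # "max" or a number
current = (cg / "pids.current").read_text().strip()  # live PID/TID count

print(f"apache2.service tasks: {current} / {limit}")
```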
So now I know the how and why, but how should I solve it? The sanest option would be to get a rough estimate of just how many threads the Apache webserver might need. That, though, is harder than it looks. I cannot just take the MaxRequestWorkers number... the instance already runs roughly double that many threads. The cause, as I found out, is the HTTP/2 module, which spawns additional threads that do not count toward this limit. So I have no idea what limit to set.
Or I could... disable the limit for just the webserver via the TasksAccounting switch. I thought this would work. And it did seem to... until I ran out of TIDs again - although systemctl status apache2.service no longer reported the number of tasks or a task limit for the process, the PID CGroup stayed set to the previous limit. Later I found out that I can only really disable task accounting for all the units of a given slice and its parents.
This, though, systemctl somewhat didn't make apparent (And I skimmed the manual, that part was my fault)
So... The only remaining option I had was to... Just set the limit to infinite. And that worked, at last.
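For the record, the working fix presumably boils down to a drop-in like this (the file name is illustrative; TasksMax= itself is straight out of systemd.resource-control):

```
# /etc/systemd/system/apache2.service.d/tasks.conf  (illustrative path)
[Service]
# Remove the PID/TID cap for this unit entirely.
TasksMax=infinity
```

Applied with a systemctl daemon-reload followed by a restart of the unit.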
It took me several hours to debug this issue. And I once again feel like uninstalling systemd again, in favor of sysvinit.
What did I learn? RTFM, carefully, everything is important, it is not enough to read *half* the paragraph of a given configuration option...
Oh, and Apache + HTTP/2 = huge TID sink.
-
Am I missing something with VMs? Every time I set one up (either a Linux guest on a Win10 host or the reverse), what I get is so slow and laggy that I can barely use it. I always give my VMs about 70% of my RAM and at least 4 CPU cores, but it still ends up as slow as my grandma's 10-year-old laptop.
On the other hand, I've seen people on YouTube editing fucking videos in their VMs, and the performance was so good that you couldn't even tell it was a VM.
This happens both on my desktop (8th-gen i7, 16 GB RAM, and an SSD) and on my laptop (same CPU, 8 GB RAM, NVMe).
Is VirtualBox not good enough, are my devices not powerful enough, or am I just too stupid?
-
Fucking hate explaining basic shit to the computer illiterate. Usually I don't mind, but right now I'm working on a project: I want to automate one thing I have to do every morning, entering two numbers into a web page (I'll maybe explain the details in a later rant). So, I am the only one who fixes and buys computers and printers (for some problems I call in an outside repairman) - generally speaking, I'm the IT guy. The firm has like 50 computers, some of which run SCADA software. Some have Win 7, some Win 8, and others Win 10; I can't upgrade them all, not enough money (I can deal with that problem). And yes, buying computers is not the fastest or easiest thing either. Because this is a public firm, I have to go through public procurement (not sure how to translate it to English), and most of the time the lowest price wins - I am OK with that. But I can't put a specific PC model or its components into the item specification. Example: I can't write that I want an Intel processor, but I can specify the number of cores and the frequency. It's not that bad, though; I usually have a template for everything I buy. One of the worst things is this: our firm bought a new version of our bookkeeping software, and the old version ran on Visual FoxPro. Good thing I didn't initiate the purchase, because right now I would be jobless - not because I'd be fired, but because our senior accountant would have driven me crazy. In fact, the accountants in general drive me crazy, but I can handle it for now. As I wrote before, our firm has about 120 workers, most of them old, around my parents' age (I'm 28, btw; Mom is 55). You all know what happens when you say you work with computers. So our accountants, who are around 60, got the new program, don't know how to work with it, and ask me how to do certain things. If I don't know, I have to ask the program's support, and every question costs like 90 EUR. In short, the accountants expect me to know both their work and how their program works, and if I say something they don't like, they try to make my day hard. The next thing is our billing program. The man who worked here before me had set up some payment imports, and when I arrived everyone expected me to keep doing them. OK, I did, because the people working with the billing program would probably fuck it up - and I semi-automated it, so I don't mind that much. Sometimes that program fucks up, like it did yesterday: it sent out email invoices whose attachments had no filename. People got an attachment named just ".pdf" (no filename, only the extension), and to read it you have to use OPEN WITH and pick a PDF reader, or rename the file (I don't know which is easier). And surprise, surprise: our customer support redirects all the resulting phone calls and emails to me, even though I explained to them exactly what to say to people.
PS: This is my first job out of school. I work part-time.
TL;DR: Thinking about my life and career choices. Accountants are not the nicest people.