Search - "split screen"
-
I’m a senior dev at a small company that does some consulting. This past October, a really heavy personal situation came up and my job suffered for it. I raised the flag and was very open with my boss about it, and both he and my team of 3 understood and were pretty cool with me taking on a smaller load of work while I dealt with some stuff in my life. For a week.
Right after that, I got sent to a client. “One month only, we just want some presence there since it’s such a big client” alright, I guess I can do that. “You’ll be in charge of a team of a few people and help them technically.” Sounds good, I like leading!
So I get here. Let’s talk technical first: from being in a small but interesting project using Xamarin, I’m now looking at Visual Basic code, using Visual Studio 2010. Windows fucking Forms.
The project was made by a single dev for this huge company. She did what she could but as the requirements grew this thing became a behemoth of spaghetti code and User Controls. The other two guys working on the project have been here for a few months and they have very basic experience at the job anyways. The woman that worked on the project for 5 years is now leaving because she can’t take it anymore.
And that’s not the worst of it. It took from October to December for me to get a machine. I literally spent two months reading on my cellphone and just going over my shitty personal situation for 8 hours a day. I complained to everyone I could and nothing really worked.
Then I got a PC! But wait… no domain user. Cue an extra month in which I could see the Windows 7 (yep) login screen and nothing else. Then, finally! A domain user! I can log in! Just wait 2 extra weeks for us to give your user access to the Subversion repo and you’re good to go!
While all of this went on, I didn’t get an access card until a week ago. Every day I had to walk to the reception desk, show my ID and ask them to call my boss so he could grant me access. 5 months of this, both at the start of the day and after lunch. There was one day in particular, between two holidays, when no one who could grant me access was at the office. I literally stood there until 11am, at which point I called my company and told them I was going home.
Now I’ve been actually working for a while, mostly fixing stuff that works like crap and trying to implement functions that should have been finished but aren’t even started. Did I mention this App is in production and being used by the people here? Because it is. Imagine if you will the amount of problems that an application connecting to the production DB can create when it doesn’t even validate whether a field should receive numeric values only. Did I mention the DB itself is also a complete mess? Because it is. There’s an "INDEXES" table in which, I shit you not, the IDs of every other table are stored. There are no Identity fields anywhere; instead, every insert has to go to this INDEXES table, check the last ID of the table we’re working on, then create a new record in order to give you your new ID. It’s insane.
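For anyone who hasn’t seen this anti-pattern in the wild, here is a rough sketch of what every single insert has to do. Table, column and helper names are hypothetical (the real thing is in VB, this is just the shape of it, in TypeScript):

// Hypothetical sketch of the described pattern; `db.query` stands in for whatever data access layer exists.
// Every insert is a read-modify-write against the shared INDEXES table instead of letting the DB hand out an identity value.
async function insertCustomer(db: { query: (sql: string, params?: unknown[]) => Promise<any[]> }, name: string): Promise<number> {
  // 1. Look up the "last ID" that INDEXES remembers for the target table.
  const rows = await db.query("SELECT LastId FROM INDEXES WHERE TableName = ?", ["Customers"]);
  const newId = (rows[0]?.LastId ?? 0) + 1;

  // 2. Write the counter back. Two clients doing this at once without a lock both get the same newId.
  await db.query("UPDATE INDEXES SET LastId = ? WHERE TableName = ?", [newId, "Customers"]);

  // 3. Only now insert the actual row, carrying the hand-rolled ID.
  await db.query("INSERT INTO Customers (Id, Name) VALUES (?, ?)", [newId, name]);
  return newId;
}

One identity/auto-increment column per table would make all of this disappear.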
And, to boot, the new order from above is: We want to split this app in two. You guys will stick with the maintenance of half of it, some other dudes with the other. Still both targeting the same DB and using the same starting point, but each only working on the module that we want them to work in. PostmodernJerk, it’s your job now to prepare the app so that this can work. How? We dunno. Why? Fuck if we care. Kill you? You don’t deserve the swift release of death.
Also I’m starting to get a bit tired of comments that go ‘THIS DOESN’T WORK’ and ‘I DON’T KNOW WHY WE DO THIS BUT IT HELPS’ and my personal favorite ‘??????????????????????’
-
A message to all Android developers:
MAKE YOUR APP SUPPORT SPLIT SCREEN.
Sincerely, a pissed off multitasker.
-
WASM was a mistake. I just wanted to learn C++ and have fast code on the web. Everyone praised it. No one mentioned that it would double or quadruple my development time. That it would cause me to curse repeatedly at the screen until I wanted to harm myself.
The problem was never C++, which was a respectable if long-winded language. No no no. The problem was the lack of support for 'objects' or 'arrays' as parameters or return types. Anything of any complexity lives on one giant Float32Array which must surely bring a look of disgust from every programmer on this muddy rock. That is, one single array variable that you re-use for EVERYTHING.
Have a color? Throw it on the array. 10 floats in an object? Push it on the array - and split off the two bools via dependency injection (why do I have 3-4 line function parameter lists?!). Have an image with 1,000,000 floats? Drop it in the array. Want to return an array? Provide a malloc ptr into the code and write to it, then read from that location in JS after running the function, modifying the array as a side effect.
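To make that concrete, here is roughly what the JS/TS side of that dance looks like, assuming an Emscripten-style module (the exported _process_colors function is hypothetical; the _malloc/HEAPF32 plumbing is the usual Emscripten shape):

// Sketch of the "pass a malloc'd pointer, mutate it, read it back" pattern.
// `_process_colors` stands in for a hypothetical C++ export: void process_colors(float* data, int n);
declare const Module: {
  _malloc: (bytes: number) => number;
  _free: (ptr: number) => void;
  HEAPF32: Float32Array;
  _process_colors: (ptr: number, n: number) => void;
};

function processColors(input: Float32Array): Float32Array {
  const bytes = input.length * Float32Array.BYTES_PER_ELEMENT;
  const ptr = Module._malloc(bytes);          // reserve space on the one big heap
  Module.HEAPF32.set(input, ptr / 4);         // copy JS data in (offset is in floats, not bytes)

  Module._process_colors(ptr, input.length);  // C++ mutates the buffer in place

  const out = Module.HEAPF32.slice(ptr / 4, ptr / 4 + input.length); // read the "return value" back out
  Module._free(ptr);                          // and don't forget to free, or enjoy the leak
  return out;
}

Get the pointer arithmetic or the length wrong anywhere in there and you are silently reading or writing someone else's slice of the heap, which is exactly the off-by-one hell described below.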
My- hahaha, my web worker has two images it's working with, calculations for all the planets, sun and moon in the solar system, and a bunch of other calculations I wanted offloaded from the main thread... they all live in ONE GIANT ARRAY. LMFAO. If I want to find an element? I have to know exactly where to look or else, good luck finding it among the millions of numbers on that thing.
And of course, if you work with these, you put them in loops. Then you can have the joys of off-by-one errors that not only result in bad results in the returned array, but inexplicable errors in which code you haven't even touched suddenly has bad values. I've had entire functions suddenly explode with random errors because I accidentally overwrote the wrong section of that float array. Not like, the variable the function was using was wrong. No. WASM acted like the function didn't even exist and it didn't know why. Because, somehow, the function ALSO lived on that Float32Array.
And because you're using WASM to be fast, you're typically trying to overwrite things that do O(N) operations or more. NO ONE is going to use this to return a + b. One-off functions just aren't worth programming in WASM. Worst of all, debugging this is often a matter of writing print and console.log statements everywhere, to try and 'eat' the whole array at once to find out what portion got corrupted or broke. Or commenting out your code line by line to see what in the forsaken 9 circles of coding hell caused your problem. It's like debugging blind in a strange and overgrown forest of code that you don't even recognize, because most of it is there to satisfy the needs of WASM.
And because it takes so long to debug, it takes a massively long time to create things, and by the time you're done, the dependent package you're building for has 'moved on' and you find you suddenly need to update a bunch of crap when you're not even finished. All of this, purely because of a horribly designed technology.
And do they have sympathy for you for forcing you to update all this stuff? No. They don't owe you sympathy, and god forbid they give you any. You are a developer and so it is your duty to suffer - for some kind of karma.
I wanted to love WASM, but screw that thing, its horrible errors and, most of all, the WASM heap32.
-
My 1366x768 laptop resolution is not enough to even have intellij open along with firefox in split screen mode... :(
-
Planned to watch YouTube and read devRant.
Well, apparently not, because the devRant Android app doesn't support split screen.
-
So I log into a great new site with my development machine. 64GB of RAM and two hex-core CPUs; GTX 1070 video, SSD, etc. 4K display screen. (Motherboard is 5 years old, not trying to brag, just giving context). I regularly put 8 pages of text on the screen side by side. Split ergonomic keyboard.
It wants me to load a mobile app for "full access".
Yea, why look at the world with wide open eyes when you can view everything through a cardboard toilet paper tube and type with your thumbs???
== John == -
Now that the phone has a custom ROM with root, with only a little issue with some split screen nonsense, I'm finally ready to use my phone like a normal ph- OH MY GOD WHAT IS THIS? WHERE THE HELL ARE MY BLOBS? WHAT THE FUUCK!?
Good thing that I rooted with Magisk and I could flash the blobs https://forum.xda-developers.com/ap...
-
"Pokemon Let's Go" review:
I knew it would be a very easy game, made to transition Pokemon Go players to the core series of games, but this game is just poorly thought out. The multiplayer was obviously an afterthought; there is no split-screen. When the other player goes off-screen, they are lost off camera. Player 2 cannot interact with anything: they cannot talk to people, collect items, or initiate battles (They walk right through Pokemon)
The game is too easy by design. You cannot fight wild Pokemon, so you end up having 6 Pokemon by the beginning of the game all at full health (And everything gets XP when you catch something, so most of your Pokemon will be up to level 6-10 by your first battles) and the opposition will only have one level 3-4 Pokemon.
This trend continues throughout the game.
The map is tiny. You could walk the whole thing in an hour. Even Gameboy Pokemon maps were larger.
I knew this going into it, but it only has gen 1, which means pretty much no Pokemon, and they're the ones that I'm bored of. Every shitty game starts with generation 1 Pokemon and then never introduces anything else. I'm sick of Pidgeys!
Plus the hefty price tag of $60 just makes this game not worth much, despite the hype they tried to give it. That's probably why they were so secretive about the gameplay before launch: they knew it was bad.
-
The downside of cheating is it removes the stopping point since there are no barriers.
The upside is you realize how badly these developers want your money and the amount of time it would take to finish the game if you weren't cheating... And you'd probably rage quit first... which I can't do until I realize how much time I've wasted.
That usually happens when I finally beat the game or am greeted with an Under Construction screen...
It now takes a total of 20+ stars to build each object, but they split it up, so I imagine if you were really playing, each time you'd go: what?!!!! Wtf....
-
So recently I had an argument with gamers about the memory required in a graphics card. The guy suggested an 8GB model of.. idk, I forgot the GPU model already, some Nvidia crap.
I argued against that: well, why does memory size matter so much? I know that it takes bandwidth to generate and store a frame, and I know how much size and bandwidth that is. It's a fairly simple calculation - you take your horizontal and vertical resolution (e.g. 2560x1080, which I'll go with for the rest of the rant) times the number of subpixels (so red, green and blue) times the bit depth (i.e. the number of values you can set the subpixel/color brightness to, usually 8 bits i.e. 0-255).
The calculation would thus look like this.
2560*1080*3*8 = the resulting size in bits. You can omit the last 8 to get the size in bytes, but only for an 8-bit display.
The resulting number you get is exactly 8100 KiB, or roughly 8MB, to store a frame. There is no more to storing a frame than that. Your GPU renders the frame (might need some memory for that, but not 1000x the size of the frame itself, that's ridiculous), stores it into a memory area known as a framebuffer, and the display eventually takes it and puts it on the screen.
Assuming that the refresh rate for the display is 60Hz, and that you didn't overbuild your graphics card to display a bazillion lost frames for that, you need to display 60 frames a second at 8MB each. Now that is significant. You need 8x60MB/s for that, which is 480MB/s. For higher framerate (that's hopefully coupled with a display capable of driving that) you need higher bandwidth, and for higher resolution and/or higher bit depth, you'd need more memory to fit your frame. But it's not a lot, certainly not 8GB of video memory.
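If you want to sanity-check those numbers yourself, the whole argument fits in a few lines (this is just the rant's arithmetic written out, nothing more):

// Back-of-the-envelope framebuffer math for a 2560x1080, 8-bit, 60Hz display.
const width = 2560;
const height = 1080;
const subpixels = 3;                                         // R, G, B
const bitDepth = 8;

const bitsPerFrame = width * height * subpixels * bitDepth;  // 66,355,200 bits
const bytesPerFrame = bitsPerFrame / 8;                      // 8,294,400 bytes
const kibPerFrame = bytesPerFrame / 1024;                    // exactly 8100 KiB, ~8 MB

const refreshRate = 60;                                      // Hz
const bandwidth = bytesPerFrame * refreshRate / 1e6;         // ~498 MB/s of scan-out, i.e. the "8x60MB/s" ballpark

console.log({ kibPerFrame, bandwidth });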
Question time for gamers: suppose you run your fancy game from an iGPU in a laptop or whatever, with 8GB of memory in that system you're resorting to running off the filthy iGPU from. Are you actually using all that shared general-purpose RAM for frames and "there's more to it" juicy game data? Where does the rest of the operating system's memory fit in such a case? Ahhh.. yeah it doesn't. The iGPU magically doesn't use all that 8GB memory you've just told me that the dGPU totally needs.
I compared it to displaying regular frames, yes. After all that's what a game mostly is, a lot of potentially rapidly changing frames. I took the entire bandwidth and size of any unique frame into account, whereas the display of regular system tasks *could* potentially get away with less, since most of the frame is unchanging most of the time. I did not make that assumption. And rapidly changing frames is also why the bitrate on e.g. screen recordings matters so much. Lower bitrate means that you will be compromising quality in rapidly changing scenes. I've been bit by that before. For those cases it's better to have a huge source file recorded at a bitrate that allows for all these rapidly changing frames, then reduce the final size in post-processing.
I've even proven that driving a 2560x1080 display doesn't take oodles of memory because I actually set the timings for such a display in order for a Raspberry Pi to be able to drive it at that resolution. Conveniently the memory split for the overall system and the GPU respectively is also tunable, and the total shared memory is a relatively meager 1GB. I used to set it at 256MB because just like the aforementioned gamers, I thought that a display would require that much memory. After running into issues that were driver-related (seems like the VideoCore driver in Raspbian buster is kinda fuckulated atm, while it works fine in stretch) I ended up tweaking that a bit, to see what ended up working. 64MB memory to drive a 2560x1080 display? You got it! Because a single frame is only 8MB in size, and 64MB of video memory can easily fit that and a few spares just in case.
I must've sucked all that data out of my ass though, I've only seen people build GPUs out of discrete components and gone down to the realms of manually setting display timings.
Interesting build log / documentary style video on building a GPU on your own: https://youtube.com/watch/...
Have fun!
-
I HATE SURFACES SO FRICKING MUCH. OK, sure they're decent when they work. But the problem is that half the time our Surfaces here DON'T work. From not connecting to the network, to only one external screen working when docked, to shutting down due to overheating because Microsoft didn't put fans in them, to the battery getting too hot and bulging.... So. Many. Problems.

It finally culminated this past weekend when I had to set up a Laptop 3. It already had a local AD profile set up, so I needed to reset it and let it autoprovision. Should be easy. Generally a half-hour or so job. I perform the reset, and it begins reinstalling Windows. Halfway through, it BSOD's with a NO_BOOT_MEDIA error. Great, now it's stuck in a boot loop. Tried several things to fix it. Nothing worked. Oh well, I may as well just do a clean install of Windows. I plug a flash drive into my PC, download the Media Creation Tool, and try to create an image. It goes through the lengthy process of downloading Windows, then begins creating the media. At 68% it just errors out with no explanation. Hmm. Strange. I try again. Same issue. Well, it's 5:15 on a Friday evening. I'm not staying at work. But the user needs this laptop Monday morning. Fine, I'll take it home and work on it over the weekend.

At home, I use my personal PC to create a bootable USB drive. No hitches this time. I plug it into the laptop and boot from it. However, once I hit the Windows installation screen the keyboard stops working. The trackpad doesn't work. The touchscreen doesn't work. Weird, none of the other Surfaces had this issue. Fine, I'll use an external keyboard. Except Microsoft is brilliant and only put one USB-A port on the machine. BRILLIANT. Fortunately I have a USB hub so I plug that in. Now I can use a USB keyboard to proceed through Windows installation. However, when I get to the network connection stage no wireless networks come up. At this point I'm beginning to realize that the drivers which work fine when navigating the UEFI somehow don't work during Windows installation. Oh well. I proceed through setup and then install the drivers. But of course the machine hasn't autoprovisioned because it had no internet connection during setup. OK fine, I decide to reset it again. Surely that BSOD was just a fluke. Nope. Happens again. I again proceed through Windows installation and install the drivers. I decide to try a fresh installation *without* resetting first, thinking maybe whatever bug is causing the BSOD is also deleting the drivers. No dice.

OK, I go Googling. Turns out this is a common issue. The Laptop 3 uses wonky drivers and the generic Windows installation drivers won't work right. This is ridiculous. Windows is made by Microsoft. Surface is made by Microsoft. And I'm supposed to believe that I can't even install Windows on the machine properly? Oh well, I'll try it. Apparently I need to extract the Laptop 3 drivers, convert the ESD install file to a WIM file, inject the drivers, then split the WIM file since it's now too big to fit on a FAT32 drive. I honestly didn't even expect this to work, but it did. I ran into quite a few more problems with autoprovisioning which required two more reinstallations, but I won't go into detail on that.

All in all, I totaled up 9 hours on that laptop over the weekend. Suffice to say our organization is now looking very hard at DELL for our next machines.
-
Wow... so I split my SSD into 2 partitions, one for Windows and one for Ubuntu. After booting into Ubuntu multiple times and it working fine, I boot into Windows for a while. Next boot I get met with the grub rescue screen... apparently when I booted into Windows it deleted my Ubuntu partition and allocated the space back to its own partition. 😐
-
So we started a new Unity video game project for mobile in June 2021. Hooray!
Being a mobile project, one of the earliest things we think about is scaling the interface across all sorts of device screen resolutions and aspect ratios, right? Well, to preemptively solve this problem early on, I decided to letterbox the game view - just choose one aspect ratio for the game and pad black bars to the sides of the screen. Simple, solves the game's world space problem without trying too hard, and it automatically adapts to Android's split-screen mode.
I showed the early builds to management as well as the game design team and they gave me some general nods. Sounds like a green light ahead. I spent the next few months building the game logic and scaling the UI around a consistent letterboxed game view. If you have experience scaling Unity UI to a letterboxed area, you already know that it takes a whole paradigm of its own that's kinda hard to break out of, but the fact that it stays consistent across all screen aspect ratios is so worth it. Regardless, the bigger benefit of letterboxing is a simpler world space setup. You don't worry about whether this particular area will be overflowed horizontally or vertically on a particular device or not. You have a 9:16 window to view the world through, nothing needs to move at runtime, and that's about it.
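For context, the letterbox approach really is just a few lines of aspect-ratio math (sketched here in TypeScript rather than the actual Unity code, so treat the numbers and names as illustrative only):

// Fit a fixed 9:16 game view inside whatever screen we get; whatever is left over becomes black bars.
interface Rect { x: number; y: number; width: number; height: number; }

function letterboxViewport(screenW: number, screenH: number, targetAspect = 9 / 16): Rect {
  const screenAspect = screenW / screenH;
  if (screenAspect > targetAspect) {
    // Screen is wider than the game view: bars go left/right (pillarbox).
    const viewW = screenH * targetAspect;
    return { x: (screenW - viewW) / 2, y: 0, width: viewW, height: screenH };
  }
  // Screen is taller/narrower than the game view: bars go top/bottom (letterbox).
  const viewH = screenW / targetAspect;
  return { x: 0, y: (screenH - viewH) / 2, width: screenW, height: viewH };
}

// e.g. a 1080x2400 phone ends up with a 1080x1920 game view and 240px bars top and bottom.
letterboxViewport(1080, 2400); // => { x: 0, y: 240, width: 1080, height: 1920 }

In Unity you would feed a normalized version of that rect into the camera's viewport rect, but the point is the same: one fixed aspect ratio, everything else is padding.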
Fast-forward to early September 2021 and 40+ builds later, and the GD started having concerns that the playing area is not filling up his phone screen and that the letterboxes are bothering him. He wants to get rid of the letterboxes and wants the game world as well as the UI to fill up his screen.
Yes. After 40+ builds, for all of which the letterbox was present, nobody in the project raised a concern about the letterbox. It's only NOW that they all of a sudden side with the GD and demand the removal of the letterbox. I feel like almost half of my effort on this game has been wasted. These clueless guys didn't spend one second looking at the early builds thinking of the possibility that the black bars at the top and bottom of their phone screens (which, I repeat, have been around since the very first build) are gonna bother them? Somebody must be playing a cruel joke at this company. They had all the chances to bring this up as a potential issue and TODAY is the first time I hear of it.
See, designers. You waste our time and your time by doing this kind of thing. Please raise your issues early. Complain to us ASAP. If you wait this long before raising an issue that has been in-your-face the whole time, I can't fault any developer for assuming you're trying to play a long prank. I can tell designers right now: it's not funny.
-
Just did my interview with Turing & OMG!
2 questions, total of 30 mins to answer both questions, and there's a dude with access to your screen, camera & microphone watching your every move.
Went horribly. Utter failure. Not expecting to hear back from them.
Questions weren't related to the skills I said I had. They were general questions that could be answered in any language. I honestly wasn't ready to write code to split an array of numbers into 3 parts whose sums would be equal.
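For the curious, the question itself is manageable once the panic wears off. One take on it (mine, not Turing's reference answer, and assuming the parts are contiguous and the numbers non-negative) is a greedy scan for two cut points:

// Split nums into three contiguous chunks with equal sums, or return null if impossible.
function splitIntoThreeEqualParts(nums: number[]): [number[], number[], number[]] | null {
  const total = nums.reduce((a, b) => a + b, 0);
  if (total % 3 !== 0) return null;                 // can't split evenly
  const target = total / 3;

  let running = 0;
  const cuts: number[] = [];                        // exclusive end indices of the first two chunks
  for (let i = 0; i < nums.length && cuts.length < 2; i++) {
    running += nums[i];
    if (running === target) {
      cuts.push(i + 1);
      running = 0;
    }
  }
  if (cuts.length < 2 || cuts[1] >= nums.length) return null; // third part must not be empty

  return [nums.slice(0, cuts[0]), nums.slice(cuts[0], cuts[1]), nums.slice(cuts[1])];
}

splitIntoThreeEqualParts([3, 1, 2, 2, 4]); // => [[3, 1], [2, 2], [4]]

Easy to say now, of course. With the clock running and someone silently watching your camera, it's a different story.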
FML. Fuck this shit. I'm tired of all the bullshit (mine included)!
-
I found the best text editor for basic code fixing
For a couple of days, I was looking for a simple terminal-based text editor for taking simple code notes or doing basic code fixes.
As an aspiring developer, I really like the concept of coding without touching the mouse.
So I downloaded the king of CLI text editors, Vim.
Now, guess what happened.
Yeah, you're right. I got stuck inside vim and couldn't even quit from there.
Then, I started watching a bunch of tutorials and started reading vim's documentation.
But then I realized I'd have to learn a lot of things just to operate vim, and it's a pretty lengthy process.
At that time, I really needed a very simple text editor for doing basic stuff.
But, vim is not simple... you know :)
So, I had to come back to 'nano', and I was not happy writing code in 'nano'.
Suddenly, I discovered another really cool text editor called 'micro'.
It's really awesome.
It's not as advanced as vim but definitely a lot better than nano.
Micro is an open-source command-line text editor created by Zachary Yedidia.
Some basic key points of Micro:
1. It's really easy to operate.
2. It has different colours and highlights.
3. It supports syntaxes for over 70+ programming languages.
4. It has mouse support.
5. Plugins & colour schemes.
The best thing for me is colour schemes & screen split support.
Check out my full article on DEV - @souviktests.
-
I have a rant. A genuine rant, not a funny story, etc.
I want a keyboard. I need one. It can cost €500, as long as it won't break in a year and fulfils all my needs. Make it a €1000, I don't care. What are my needs then? Well...
It has to be a split keyboard - two halves. But wireless in every aspect, ergonomic, with multimedia keys on its outer edges (preferably pointing outwards, not up) and a heavy metal trackball on the right outer edge (preferably upper right corner). That's a bare minimum.
On top of that, some magnetic scroll wheels for things like navigating pages, changing volume and fidgeting in general probably wouldn't hurt. Also I'd prefer it to snap back into a one-piece whenever I need it to lie on my knees, e.g. when I type while sitting on a couch (I have a couch PC setup, no desk, and there's a reason). Why do I need it to split then...?
I had an accident. I kind of broke my back when I was 11. It's mostly okay now after a couple of years of rehabilitation and many more years of careful living. Luckily the only two wheels I ride on are powered by a 105.97 hp @ 9,970 rpm engine. Still, I try to be careful, so I tried tons of work hygiene techniques over the years and I found out anything over 2 hours is best done while lying flat.
Coding while lying flat has its challenges, mostly focused around screen and input. Ever since I got a VR headset half of them got solved but the other half - acquiring a suitable keyboard - it's very hard to satisfy. I tried that with a one-piece keyboard lying on my stomach. Turns out actively bending elbows quickly wears them out (hello tennis players). So a split keyboard it has to be. So far I tried 4 different ones and I had to modify the cable connecting both halves in each and every one of them so that it'd be long enough to go behind my back. The main cable itself I only had to modify once because usually there're extensions available.
Apart from cables, all of those keyboards had issues. Starting with some kind of de-syncing where keys from both halves would randomly register in the wrong order - I didn't know that was possible with cable-connected halves... I did try two generic WiFi keyboards (using one for each hand) and they unfortunately suffered from that very same issue, but I was sure it wouldn't happen if the device was designed to be one unit from the very beginning, right? And yet it happened in 2 of the tested devices.
Other than that, plugs disconnecting on their own forcing me to take off the headset and fiddle around, key travel so high that it'd strain the wrists after a few hours, even the noise that would wake up my girlfriend sleeping in a separate room - these were all common issues (I briefly had an almost completely silent WiFi mechanical keyboard from Logitech we both really liked, but it was a one-piece). Once I got a split keyboard that was "natively" WiFi, but not only were the two halves still connected with a cable that turned out to be way too short for my needs, it also had a very noticeable lag despite the high price - a lag way higher than any of the cheap WiFi keyboards I owned in the past. So I sent it back. Now IDK what to do because AFAICT there are no more models available, at least where I live.
So yeah, I need a keyboard and I'll probably have to make one myself. Sorry, just had to vent.
-
Since gitkraken is turning into such a bitch, I've been searching for alternatives once again; as usual, none of the competitors have implemented even a fraction of it after all this time.
Sublime Merge looked promising, but half the time it fucks up the history graph, fails to remove remotes, and does more funny stuff I don't want to mess with.
Github Desktop I didn't even try because it didn't seem to have any proper history graph to begin with.
For now I've ended up on Sourcetree, though I really do miss having the commit message and description as two separate inputs. I've only done the most basic merge so far, so it's a to-be-continued experience.
I'm mostly afraid of how it'll show merge conflicts and the commit view, as from what I gathered it doesn't go fullscreen when you click a commit, but instead shows an awkward small screen at the bottom of the graph, split further in half with the avatar and commit message.
Edit: oh for fuck's sake, just noticed it doesn't even have Linux support, god damn it.
-
TIL: the new M2 MacBooks officially support only a single external screen. Not even the Pro class supports dual. It supports a single 6K monitor, but I've failed to find any user-friendly way to get good tiling that would be equivalent to multi-monitor. The only native tiling is a left/right split. TB3 can handle 3x2K or 2x4K, but Apple said "fuck you and your multi-monitor setup".
I ain't mad tho. The guy upgrading to M2 sold me his dual monitors for a really good price.
After exploring a lot of UI frameworks and architectures, I am trying to go back to Android dev, but again with the curiosity for the one single question that I had at the start of my career 5 years back: why is its UI so complex?
Can anyone help me understand it?
Like, comparing with the most basic UI framework (html/css/js), why is Android so different? We've got activities, fragments and views. The worst thing in Android is lifecycles, which each of these UI components has.
The view lifecycle is simple to get over with: whatever the lifecycle of its parent is, is the lifecycle of the view.
A view's parent is another view, whose parent is another view, whose parent is... and so on until we reach the root view, which is stored by either a fragment or an activity.
Therefore a view's lifecycle = lifecycle of activity or fragment.
Till here it's very clear. The fuckup is simply in the next part:
WTAF is an activity? WTAF is a fragment? Why are their various functions called in the sequence they are called? onCreate, onStart, onCreateView, onDestroy... why?
Activity is still somewhat okay, but fragment is completely weird af: it can be a part of an activity: basically it can cover your complete screen and behave as an activity itself (so you don't get to say that activity === screen and fragment === view) AND IT HAS ITS OWN FUCKING LIFECYCLES! So does that mean a fragment's functions can also be called by the OS?
What's more mind-fucking is the fact that an Android activity can destroy/pause or recreate fragments on its own, via some "views" like ViewPager, or even hold multiple fragments as "alive" at the same time, using something called a "backstack"??!??!
And each of these fragments in the stack can be called by the system at any time? Like wtf???
All this stuff is super confusing and I haven't even scratched the surface. The newer, more complicated stuff like ViewModel, LiveData and again "lifecycles" has a completely separate behavior and functionality of its own. Plus the various "reality-check" scenarios like: when a user is streaming a video in picture-in-picture mode while keeping your app in split screen with Maps in the second split, then a call comes and the video keeps running, and the user rotates the device - let me know the clusterfuck situation for the 3rd fragment in your 5-icon navigation view currently at the payment page with 2 fragments and 1 activity in the backstack!!!
God bless thy soul, for this shitty framework isn't going anywhere; rather it's super strong and getting more clusterfucked with new beautiful shit every day.
(If someone can ignore my gentle language, I would really like to know/get redirected to some resources where I can learn more on this)
-
I've always wanted to do something in IT Support, but I didn't know where to start. I've been helping my co-workers optimize their system and even helped retrieve photos from a tablet that had a broken screen; her service plan said along the lines of "if they weren't there they were lost," I was able to retrieve them in a matter of hours (Really guys! I'm shocked! It was just a broken touchscreen, the storage was just fine. I think I'll remember this moment).
And because of my growing popularity, I started a new business called The Webnician. The company is split into two sections, the Technician and the Web Developer. Hence, The Web(Tech)nician. I am proud of my name choice.
Then I wanted to become a certified technician, so I did some research on how to become one and found out I need to take the CompTIA A+ 220-901 and 220-902 exams and... I couldn't be more excited!
I've always loved computers, and maybe my late father had some say into it. Nevertheless, I am excited to begin my journey, even though it took awhile to find where I needed to go. I hope you all can follow me on my journey and support my new business.
I don't have anything else to say, so I'll just leave it here.
To the reactjs-centered fucks who develop the popular web component viewing software called storybook: have you ever heard about semver?
89 alpha/beta/rc releases for a minor update 6.3 -> 6.4 with "100's of fixes and enhancements" "in preparation of the HUGE 7.0 release". Gee I wonder will it have 1000's of bugfixes? How bug-ridden is this software?
Every minor upgrade since 5.x is backwards-incompatible and requires a day of frustration finding out in how many more fucking NPM packages you split your codebase just because it's cool. I know move fast and break things, but some of us have other things to do than resolving node_modules incompatibilities you know. "No just hit 'npx sb upgrade' you say". I did, I really did! And the browser showed a blank screen of death with tons of cryptic React errors, it really did! Thank God you abstracted away all your dependencies in that sb command, now you can't even read the docs about what could have gone wrong with a specific sub-package. You have @storybook/html but the docs redirect to React pages, so good luck if you use something else
This is so sad... like.. the IDEA of Storybook is great. But why did fate put the capacity to develop such a tool into the hands of people who think the world centers around React and JSX.. HTML should have been the default, and then you build on top of that for your fav framework, not the other way around
-
For those wanting split screen to work on android. Go to settings -> developer options -> "Force activities to be re-sizable"
And just like that. It works. (Only problem I've seen is that the + button to post isn't there.)
Very long, random and pretentiously philosophical, beware:
Imagine you have an all-powerful computer, a lot of spare time and infinite curiosity.
You decide to develop an evolutionary simulation, out of pure interest and to see where things will go. You start writing your foundation, basic rules for your own "universe" which each and every thing in this simulation has to obey. You implement all kinds of objects, with different attributes and behaviour, but without any clear goal. To make things more interesting you give this newly created world a spoonful of coincidence, which can randomly alter objects at any given time, at least to some degree. To speed things up you tell some of these objects to form bonds and define an end goal for these bonds:
Make as many copies of yourself as possible.
Unlike the normal objects, these bonds now have purpose and can actively use and alter their environment. Since these bonds can change randomly, their variety is kept high enough to not end in a single type multiplying endlessly. After setting up all these rules, you hit run, sit back in your comfy chair and watch.
You see your creation struggle, a lot of the formed bonds die and disintegrate into their individual parts. Others seem to do fine. They adapt to the rules imposed on them by your universe, they consume the inanimate objects around them, as well as the leftovers of bonds which didn't make it. They grow, split and create duplicates of themselves. Content, you watch your simulation develop. Everything seems stable for now, your newly created life won't collapse anytime soon, so you speed up the time and get yourself a cup of coffee.
A few minutes later you check back in and are happy with the results. The bonds are thriving, much more active than before and some of them even joined together, creating even larger bonds. These new bonds, let's just call them animals (because that's obviously where we're going), consist of multiple different types of bonds, sometimes even dozens, which work together, help each other and seem to grow as a whole. Intrigued what will happen in the future, you speed the simulation up again and binge-watch the entire Lord of the Rings trilogy.
Nine hours passed and your world became a truly mesmerizing place. The animals grew to an insane size, consisting of millions and billions of bonds, their original makeup became opaque and confusing. Apparently the rules you set up for this universe encourage working together more than fighting each other, although fights between animals do happen.
The initial tools you created to observe this world are no longer sufficient to study the inner workings of these animals. They have become a black box to you, but that's not a problem; one of the species has caught your attention. They behave unlike any other animal. While most of the species adapt their behaviour to fit their environment, or travel to another environment which fits their behaviour, these special animals started to alter the existing environment to help their survival. They even began to use other animals in such a way that benefits themselves, which was different from the usual bonds, since this newly created symbiosis was not permanent. You watch these strange, yet fascinating animals develop, without even changing the general composition of their bonds, and are amazed at the complexity of the changes they made to their environment and their behaviour towards each other.
As you observe them build unique structures to protect themselves from their environment and listen to their complex way of communication (at least compared to other animals in your simulation), you start to wonder:
This might be a pretty basic simulation, these "animals" are nothing more than a few blobs on a screen, obeying their programming and sometimes getting lucky. All this complexity you created is actually nothing compared to a single insect in the real world, but at what point do you draw the line? At what point does a program become an organism?
At what point is it morally wrong to pull the plug?
-
Tell me guys, what would you prefer:
function a(){
..
b(..)
..
b(..)
..
}
function b(p1,p2,p3,p4,p5,p6){.
...
}
or
function a(){
..
b(..)
..
b(..)
..
}
function b(
p1,
p2,
p3,
p4,
p5,
p6
){
...
}
If you read this rant before expanding, you got complete context on what function a is, that it's calling b 2 times, and how function b looks.
If, instead of the first option, I had used the 2nd block, you wouldn't even know the 2nd param of function b without expanding this rant.
My point?
I prefer keeping unnecessary info on one line. And a lot of linters disagree by splitting up the code. And most importantly, my arrogant TL disagrees, saying he prefers the split-up code "for readability" and because "he likes code this way, old-eng1 likes this and old-eng2 likes this".
Why tf does an IDE have a horizontal scrolling option available when you are too stupid to use it?
OK, I know some smartass is going to point out that I too can use vertical scrolling, but hear me out: I am optimising this!
Case 1: a function with 7 params is NOT split into 7 lines. Let's calculate the effort to remember it:
- since all params could have similar characteristics (they will be of some type, might have defaults, might be a suspendable/async function etc), each param will take similar memory-effort points, say 5sp each.
- total memory effort = 5sp * 7 = 35sp.
- say a human has 100sp of fast memory storage; he can use the remaining 65sp for loading, say, 5 small lines above or below.
- but since the 5 lines above are already read and still visible on screen, they won't need to be loaded again and again, and we can just check the lines below.
- thus we are able to store 65+35+65 = 165sp, or about 11 lines of code, in our fast memory for just 100sp of brain storage.
Case 2: a function with 7 params IS split into 7 lines.
- in this case all lines are somewhat similar. 5sp for param lines as they are still similar, which implies the same 35sp for storing the current function and params.
- the remaining 65sp can only be used to store the next 5 lines of 13sp each, as the previous code is no longer visible.
- plus if you wanna refresh the code above, you gotta scroll, which will result in removing the bottom code from the screen, and now your 65sp of bottom code is overwritten by 65sp of top code.
- thus at a time, you are storing only 6 lines' worth of code info. This makes you slow.
This is some imaginary math, but I believe it works.
It suuucks having to code split-screen on a 15" 1920x1080 laptop on a small desk that's only 2 feet wide. That's my home setup..
Code-cramp, I say. Time to upgrade sometime.. I need a new desk, for instance..
Any suggestions on how to extend my screen to 2 external monitors with 1 HDMI out?
Tried video streaming from USB C out to HDMI in but that isn't working.
My single HDMI port supports up to 4K output, so we should be able to split it and run up to four 1920x1080 monitors.
Not sure which adaptors would work for this.
That awkward moment when my WebStorm IDE thinks I have more than one display and opens up in split screen.
-
To the UI/UX folks... Which of these approaches is more mobile-user-friendly?
- A single screen with all 12 form fields visible to the user, where only four of these fields are optional and inputs are validated on submission.
----- OR -----
- A form with the fields split into 12 sub-screens (one per field), a progress bar at the top, next and previous buttons with a "skip" button for optional fields, and inputs validated progressively.
You can imagine the contents of the form like the ones on surveys. I have already implemented the second option, but I'm in doubt about its friendliness. I had also previously implemented something similar to the first, but it drew criticism from colleagues stating it's too many fields on one screen.
I would love to see it from your view and learn from your experience... What do you think?