> Root struggles with her ticket
> Boss struggles too
> Also: random thoughts about this job
I've been sick lately, and it's the kind of sick where I'm exhausted all day, every day (infuriatingly, except at night). While tired, I can't think, so I can't really work, but I'm in my probationary period at work, so I've still been doing my best -- which, honestly, is pretty shit right now.
My current project involves legal agreements and changing agent authorization methods (written, telephone recording, or letting the user click a link). Each of these, depending on the type of transaction, requires a different legal agreement, and the logic and structure surrounding these is intricate and confusing to follow. I've been struggling through this and the project's ever-expanding scope for weeks, and specifically the agreements logic for the past few days. I've felt embarrassed and guilty for making so little progress, and that (along with a bunch of other things) is making me depressed.
Today, I finally gave up and asked my boss for help. We had an hour and a half call where we worked through it together (at 6pm...). Despite having written quite a bit of the code and tests himself, he kept saying things like "How is this not working? This doesn't make any sense." So I don't feel quite so bad now.
I knew the code was complex and sprawling and unintuitive, but seeing one of its authors struggling too was really cathartic.
On an unrelated note, I asked the most senior dev (a Macintosh Lisa dev) why everything was using strings instead of symbols (in Rails), since symbols are much faster. That got him looking into the benchmarks, and he found that symbols are about twice as fast (for his minimal test, anyway), and he suggested we switch to those. His word is gold; mine is ignorable. Kind of annoying, but anyway: he further went into optimizing the lookup of a giant array of strings, and discovered bsearch (a divide-and-conquer lookup). And here I am wondering why they didn't implement it that way to begin with. 🙄
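For anyone unfamiliar, the idea behind bsearch (Ruby's Array#bsearch) is just binary search over a sorted array: halve the search space on every probe, O(log n) instead of O(n). A minimal sketch in Python, since the concept is language-agnostic:

```python
from bisect import bisect_left

def bsearch(sorted_items, target):
    """Divide-and-conquer lookup: O(log n) probes instead of a linear scan."""
    i = bisect_left(sorted_items, target)  # first index where target could sit
    return i if i < len(sorted_items) and sorted_items[i] == target else None

words = sorted(["agreement", "authorization", "recording", "transaction"])
assert bsearch(words, "recording") is not None  # found in ~log2(n) steps
assert bsearch(words, "missing") is None        # a linear scan would touch every element
```

The only requirement, and the usual reason it isn't done "to begin with", is that the array has to be kept sorted.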
I don't think I'm learning much here, except how to work with a "mature" codebase. To take a page from @Rutee07, I think "mature" here means the same as in porn: not something you ever want to see or think about.
I mean, I'm learning other things too, like how to delegate methods from one model to another, but I have yet to see why you would want to. Every use of it I've explored thus far has just complicated things, like delegating methods on a child of a 1:n relation to the parent. Which child? How does that work? No bloody clue! But it does, somehow, after I copy/pasted a bunch of esoteric legacy bs and fussed with it enough.
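For what it's worth, the mechanism itself is small. Here's a rough Python analogue of Rails' `delegate` (class and attribute names are hypothetical), which also shows exactly where the 1:n ambiguity creeps in:

```python
class Address:
    def __init__(self, city: str):
        self.city = city

class Customer:
    """Delegates `city` to its address, so callers can write customer.city."""
    def __init__(self, address: Address):
        self.address = address

    def __getattr__(self, name):
        # only called when normal attribute lookup fails; forward to the delegate
        if name == "city":
            return getattr(self.address, name)
        raise AttributeError(name)

customer = Customer(Address("Oslo"))
print(customer.city)  # "Oslo" -- but if a Customer had *many* addresses, which
                      # child to forward to is exactly the ambiguity complained about above
```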
I feel like once I get a good grasp of the various payment wrappers, verification/anti-fraud integration, and per-business fraud rules I'll have learned most of what they can offer. Specifically those because I had written a baby version of them at a previous job (Hell), and was trying to architect exactly what this company already has built.
I like a few things about this company. I like my boss. I like the remote work. I like the code reviews. I like the pay. I like the office and some socializing twice a year.
But I don't like the codebase. At all. And I don't have any friends here. My boss is friendly, but he's not a friend. I feel like my last boss (both bosses, actually) were, or could have been if I were more social. But here? I feel alone. I'm assigned work, and my boss is friendly when talking about work, but that's all he's there for. Of the two female devs I work with, one basically just ignores me, and the other only ever talks about work in ways I can barely understand, and she's a little pushy, and just... really irritating. The "senior" devs (in quotes because they're honestly not amazing) just don't have time, which I understand. But at the same time... I don't have *anyone* to talk to. It really sucks.
I'm not happy here.
I miss my last job.
But the reason I left that one is that this job allows me to move and work remotely. I got a counter-offer from them exactly matching my current job, sans the code reviews. But we haven't moved yet, and if I leave and go back there without having moved, it'll look like I just abandoned them. And that's the last thing I want them to think.
So, I'm stuck here for a while.
Not that it's a bad thing, but I'm feeling overwhelmed and stressed, and it's just not a good fit. But maybe I'll actually start learning things, and I suppose that's also why I took the job.
So, ever onward, I guess.
It would just be nice if I could take some of the happy along with me.
-
Okay, story time.
Back during 2016, I decided to do a little experiment to test the viability of multithreading in a JavaScript server stack, and I'm not talking about the Node.js way of queuing I/O on background threads, or about WebWorkers that box and convert your arguments to JSON and back during a simple call across two JS contexts.
I'm talking about JavaScript code running concurrently on all cores. I'm talking about replacing the god-awful single-threaded event loop of ECMAScript – the biggest bottleneck in software history – with an honest-to-god, lock-free thread-pool scheduler that executes JS code in parallel, on all cores.
I'm talking about concurrent access to shared mutable state – a big, rightfully-hated mess when done badly – in JavaScript.
This rant is about the many mistakes I made at the time, specifically the biggest – but not the first – of which: publishing some preliminary results very early on.
Every time I showed my work to a JavaScript developer, I'd get negative feedback. Like, unjustified hatred and immediate denial, or outright rejection of the entire concept. Some were even adamantly trying to discourage me from this project.
So I posted a sarcastic question to the Software Engineering Stack Exchange, which was originally worded differently to reflect my frustration, but was later edited by mods to be more serious.
You can see the responses for yourself here: https://goo.gl/poHKpK
Most of the serious answers were along the lines of "multithreading is hard". The top voted response started with this statement: "1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is."
While I'll admit that my presentation was initially lacking, I later made an entire page to explain the synchronisation mechanism in place, and you can read more about it here, if you're interested:
http://nexusjs.com/architecture/
But what really shocked me was that I had never understood the mindset that all the naysayers adopted until I read that response.
Because the bottom-line of that entire response is an argument: an argument against change.
The average JavaScript developer doesn't want a multithreaded server platform for JavaScript because it means a change of the status quo.
And this is exactly why I started this project. I wanted a highly performant JavaScript platform for servers that's more suitable for real-time applications like transcoding, video streaming, and machine learning.
Nexus does not and will not hold your hand. It will not repeat Node's mistakes and give you nice ways to shoot yourself in the foot later, like `process.on('uncaughtException', ...)` for a catch-all global error handling solution.
No, an uncaught exception will be dealt with as in any other self-respecting language: by not ignoring the problem and pretending it doesn't exist. If you write bad code, your program will crash, and you can't rectify a bug in your code by ignoring its presence entirely and duct-taping something together.
Back on the topic of multithreading, though. Multithreading is known to be hard, that's true. But how do you deal with a hard problem? You simplify it and break it down; you don't just disregard it completely, because multithreading has its great advantages, too.
Like, how about we talk performance?
How about distributed algorithms that don't waste 40% of their computing power on agent communication and pointless overhead (like the serialisation/deserialisation of messages across the execution boundary for every single call)?
How about vertical scaling without forking the entire address space (and thus multiplying your application's memory consumption by the number of cores you wish to use)?
How about utilising logical CPUs to the fullest extent, and allowing them to execute JavaScript? Something that isn't even possible with the current model implemented by Node?
Some will say that the performance gains aren't worth the risk; that the possibility of race conditions and deadlocks isn't worth it.
That's the point of cooperative multithreading: it is a way to smartly work around these issues.
If you use promises, they will execute in parallel, to the best of the scheduler's abilities, and if you chain them then they will run consecutively as planned according to their dependency graph.
If your code doesn't access global variables or shared closure variables, or your promises only deal with their provided inputs without side-effects, then no contention will *ever* occur.
If you only read and never modify globals, no contention will ever occur.
Are you seeing the same trend I'm seeing?
Good JavaScript programming practices miraculously coincide with the best practices of thread-safety.
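To make that overlap concrete, here's a minimal sketch of the idea in Python (standing in for JS, and ignoring CPython's GIL; the structural point is the same): tasks that only read their inputs and only write their return values can be thrown at a thread pool with no locks and no possibility of contention.

```python
from concurrent.futures import ThreadPoolExecutor

# a pure function: reads only its argument, writes only its return value
def hash_chunk(chunk: bytes) -> int:
    h = 0
    for b in chunk:
        h = (h * 31 + b) & 0xFFFFFFFF
    return h

chunks = [bytes([i]) * 1024 for i in range(64)]

# no task touches shared mutable state, so the scheduler is free to run
# them on every core in any order; the result is deterministic regardless
with ThreadPoolExecutor() as pool:
    results = list(pool.map(hash_chunk, chunks))

print(results[:4])
```

Chain the tasks and you get the dependency-graph behaviour described above; keep them pure and thread-safety falls out for free.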
When someone says we shouldn't use multithreading because it's hard, do you know what I like to say to that?
"To multithread, you need a pair."18 -
So I had to buy a laptop
I had a Lenovo G50-70 (2014). It was really good: a mid-range i5, 6GB of RAM, and I'd installed an SSD to speed it up. Three years (and some formats) later it started being slow. I couldn't even play Minecraft anymore (and the battery was dying).
This year we're learning and using PCs for the first (real) time, so I needed a good laptop.
I started searching everywhere: Google, shops, Amazon... (excluding HP and Acers, for good reasons 😂). And I found this Asus, which isn't really famous; the model is the UX510UX.
FHD display, Nvidia 950M, i7-7500U, really good battery, 8GB of RAM, 1TB of hard disk
I bought an additional 128GB M.2 SSD from WD (I'm sorry, but I can't live with an HDD 😂)
All of this for ~700€
I'm very happy for the purchase, the laptop is all metal (pinky bronze, but I like it)
It's really powerful, I can play GTA with mid-high details without any lag. (Remember, I play GTA on the desktop with a 1080 Ti so my standards are high)
I'm happy 😄!
-
"So Alecx, how did you solve the issues with the data provided to you by hr for <X> application?"
Said the VP of my institution in charge of my department.
"It was complex sir, I could not figure out much of the general ideas of the data schema since it came from a bunch of people not trained in I.T (HR) and as such I had to do some experiments in the data to find the relationships with the data, this brought about 4 different relations in the data, the program determined them for me based on the most common type of data, the model deemed it a "user", from that I just extracted the information that I needed, and generated the tables through Golang's gorm"
VP nodding and listening intently...."how did you make those relationships?" me "I started a simple pattern recognition module through supervised mach..." VP: Machine learning, that sounds like A.I
Me: "Yes sir, it was, but the problem was fairly easy for the schema to determ.." VP: A.I, at our institution, back in my day it was a dream to have such technology, you are the director of web tech, what is it to you to know of this?"
Me: "I just like to experiment with new stuff, it was the easiest rout to determine these things, I just felt that i should use it if I can"
VP: "This is amazing, I'll go by your office later"
Dude speaks wonders of me. The idea was simple: read through the CSV that was provided to me, do the parsing in a notebook, have it determine the relationships in the data and spit out a bunch of JSON that I could use. Hook that up to a simple gorm Golang script and generate the tables from it. Much simpler than the bullshit that we have in PHP, which quite frankly sucks at parsing data, imho. I used this to create a new database, since the previous application had issues. The app will still have a PHP frontend and backend, but now I don't leave the parsing of the data to PHP. The Python codebase creates the JSON files through the predictive modeling (98% accuracy) and then the Go program populates the DB for me.
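The version in the story used a supervised model to classify the columns; as a sketch of the pipeline's shape, here's the same flow with a dumb majority-vote type guesser swapped in (file and column names are hypothetical): read the HR CSV, classify each column, and emit JSON for the Go side to turn into tables.

```python
import csv
import json
from collections import Counter

def guess_type(values):
    """Classify a column by the most common shape of its non-empty values."""
    def shape(v):
        if v.isdigit():
            return "integer"
        try:
            float(v)
            return "float"
        except ValueError:
            return "string"
    shapes = [shape(v) for v in values if v]
    return Counter(shapes).most_common(1)[0][0] if shapes else "string"

with open("hr_export.csv", newline="") as f:   # hypothetical HR dump
    rows = list(csv.DictReader(f))

schema = {col: guess_type([row[col] for row in rows]) for col in rows[0]}

with open("schema.json", "w") as f:            # consumed by the gorm generator
    json.dump(schema, f, indent=2)
```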
There are also some node scripts that help test the data since the data is json.
All in all a good day of work. The VP seems scared since he knows no one on this side of town knows about this kind of tech. Me? I am just happy I get to experiment. Y'all should have seen his face when I showed him a rather large app written in Clojure, the man just went 0.0 when he saw Lisp code.
I think I scare him.
-
As you can see from the screenshot, it's working.
The system is actually learning the associations between the digit sequence of semiprime hidden variables and known variables.
Training loss and value loss are super high at the moment and I'm using an absurdly small training set (10k sequence pairs). I'm running on the assumption that there is a very strong correlation between the structures (and that it isn't just all ephemeral).
This initial run is just to see if training a machine learning model is a viable approach.
Won't know for a while. Training loss could get very low (that's a good thing, indicating actual learning), only for it to spike later on, and if it does, I won't know if the sample size is too small, or if I need to do more training, or if the problem is actually intractable.
If or when that happens I'll experiment with different configurations like batch sizes, and more epochs, as well as upping the training set incrementally.
In either case, once the initial model is trained, I need to test it on samples never seen before (products I want to factor) and see if it generates some or all of the digits needed for rapid factorization.
Even partial digits would be a success here.
And I expect to create multiple training sets for each semiprime product, pairing its unknown internal variables against derivable known variables. The intersections of those sets, and what digits they have in common, might be the best shot available for factorizing very large numbers with this approach.
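For a sense of what those sequence pairs could look like, here's a minimal sketch of generating a training set with sympy. The 'known variable' here is just the product's digit sequence and the 'hidden variable' is the smaller factor's, which is a simplification of whatever identities are actually in play:

```python
from sympy import randprime

def make_pair(digits: int):
    """One training example: digits of the semiprime -> digits of a factor."""
    p = randprime(10**(digits - 1), 10**digits)
    q = randprime(10**(digits - 1), 10**digits)
    n = p * q
    x = [int(d) for d in str(n)]          # known: the product's digit sequence
    y = [int(d) for d in str(min(p, q))]  # hidden: the smaller factor's digits
    return x, y

dataset = [make_pair(10) for _ in range(10_000)]  # roughly the 10k pairs mentioned above
```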
Regardless, once I see that the model works at the small scale, the next step will be to increase the scope of the training data, and begin building out the distributed training platform so I can cut down the training time on a larger model.
I also want to train on random products of very large primes, just for variety and see what happens with that. But everything appears to be working. Working way better than I expected.
The model is running and learning to factor semiprimes from the set of identities I've been exploring for the last three fucking years.
Feels like things are paying off finally.
Will post updates specifically to this rant as they come. Probably once a day.
-
Frustration Rant!
Because old hardware means learning the hard way sometimes, I've had to purchase more goodies.
On my last update, I installed the RS232 shield, which may have inadvertently been wired backwards for Tx/Rx from what I'm used to. I assume it is backwards compared to most DB9 serial ports because most Arduino (or other Pi) projects involve serial "in" connections, like old routers and devices that would be "controlled" rather than the other way around. Anyway, according to a YouTube video showing a guy turning an old machine into an IRC client via a Raspberry Pi, this shield may be swapped. That means that instead of interfacing with the old machine via a null modem crossover cable, I need a straight cable with a male DB9 on both ends. I unfortunately tried using the null modem crossover cable, which was reversing the reversed pins all over again. I hope these next few days are more fruitful now that I've bought a straight cable and a DB9/25 adapter.
The good thing is that I managed to get the Pi to recognize its new serial port. I also dusted off my DOS skills, and the serial card in the 5150 seems to work.
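For the next session, a quick loopback test can take the cable out of the equation entirely: jumper pins 2 and 3 (Tx to Rx) on the Pi's DB9 and see if bytes come back. A minimal sketch with pyserial, assuming the shield enumerates as /dev/ttyAMA0 (check dmesg if not):

```python
import serial  # pyserial

port = serial.Serial("/dev/ttyAMA0", baudrate=9600, timeout=1)

port.write(b"hello 5150\r\n")   # with Tx jumpered to Rx, this should echo straight back
echo = port.read(12)
print("loopback ok" if echo == b"hello 5150\r\n" else f"got: {echo!r}")
port.close()
```

If the loopback works at the shield but not through the cable, the cable (or the Tx/Rx swap) is the culprit.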
I literally banged my head after nothing worked. I'm hoping the Tx/Rx issue is solved soon.
Oh, and that AT to PS/2 adapter will allow me to use my original IBM Model M keyboard rather than the fun Model F.
-
I'm deploying a machine learning model to production. We don't have an automated deployment pipeline for the models, so we expose them manually through a REST API.
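The wrapper itself is the trivial part; something like this FastAPI sketch, assuming a pickled scikit-learn-style artifact (file, route, and field names are all hypothetical):

```python
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

with open("model.pkl", "rb") as f:   # the artifact this whole story is about
    model = pickle.load(f)

app = FastAPI()

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    # anything exposing .predict() works here: sklearn, xgboost, etc.
    return {"prediction": model.predict([features.values]).tolist()}

# run with: uvicorn main:app
```

All of which is to say: the only blocker was getting the artifact.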
I asked the DS for the model artifact; they didn't have permission to download files from Databricks.
I asked their manager for the artifact. He told me that he has the permissions, then bullshitted me with something about the formal process, some shit about proper permissions handling, and that they do not have a standard process for sharing files right now, so I should wait.
I was like "bro, share the artifact with me to unblock my work, then stablish your process, i dont care". He said no, and just after that he started a thread involving half of the middle management and data engineers asking for feedback on how to stablish a process for sharing databricks files. Just Wtf.
I got pissed, so I reached out to his superior (a good friend of mine), who was on vacation btw, and told him the situation. He opened Slack and humiliated him so bad that I almost felt bad for the manager jajajajaja.
I grabbed my model artifact and got out of there instantly.
-
python machine learning tutorials:
- import a preprocessed dataset in a perfect format specially crafted to match the model, instead of reading from a file like actual real life would work
- use image data for a recurrent neural network and see no problem
- use Conv1D for 2D input data like images
- use two-letter variable names that only the tutorial creator knows the meaning of
- do 10 data transformations in 1 line with no explanation of what is going on
- just enter these magic words
- okey guys thanks for watching make sure to hit that subscribe button
ehh, the machine learning ecosystem is a burning pile of shit, let me give you some examples:
- thanks to years of object oriented programming research and most wonderful abstractions we have "loss.backward()", which has no apparent connection to the model but affects the model anyway, good to know (see the annotated sketch after this list)
- cannot install the python packages because python must be >= 3.9 and at the same time < 3.9
- runtime error with bullshit cryptic message
- python having no data types but pytorch forces you to specify float32
- lets throw away the module name of a function with these simple tricks:
"import torch.nn.functional as F"
"import torch_geometric.transforms as T"
- tensor.detach().cpu().numpy() ???
- class NeuralNetwork(torch.nn.Module):
      def __init__(self):
          super(NeuralNetwork, self).__init__() ????
- lets call the function that switches the model into training behaviour "model.train()" instead of something more indicative of the function's actual effect, like "model.set_mode_to_train()"
- what the fuck is ".iloc" ?
- solving environment -/- brings back memories when you could make a breakfast while the computer was turning on
- hey lets choose the slowest, sloppiest and most inconsistent language ever created for the high performance computing task called "data sCieNcE". but.. but. you can use numpy! I DONT GIVE A SHIT about numpy, why don't you motherfuckers create a language that is inherently performant instead of calling some convoluted c++ library that requires 10s of dependencies? Why don't you create a package management system that works without me having to try random bullshit for 3 hours???
- lets set as industry standard a jupyter notebook, which is not git compatible and has either 2 second tab-completion latency or no tab completion, no documentation on hover or useless documentation on hover, no way to easily redo changes, no autosave, no error highlighting, and the possibility of using a variable defined in a cell below in the cell above it
- lets use inconsistent variable names like "read_csv" and "isfile"
- lets pass a boolean variable as a string "true"
- lets contribute to tech-enabled authoritarianism and create face recognition and object detection models that china uses to destroy the uyghur minority
- lets create a license plate computer vision system that will help the government surveil everyone, guys what a great idea
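In fairness to anyone stuck decoding the incantations earlier in that list, here's a minimal sketch of a PyTorch step (tiny model, fake data) with the magic words annotated for what they actually do:

```python
import torch
import torch.nn as nn

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()  # the verbose super(NeuralNetwork, self) form is a Python 2 leftover
        self.layer = nn.Linear(4, 1)

    def forward(self, x):
        return self.layer(x)

model = NeuralNetwork()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(8, 4)   # float32 by default: tensors are typed even if Python isn't
y = torch.randn(8, 1)

model.train()           # switches layers like dropout/batchnorm into training behaviour
optimizer.zero_grad()   # clear gradients accumulated from the previous step
loss = loss_fn(model(x), y)
loss.backward()         # autograd walks the graph recorded on the tensors, filling each param's .grad
optimizer.step()        # the optimizer holds references to the params: that's the "hidden" connection

host_loss = loss.detach().cpu().numpy()  # drop from the graph, move to host memory, convert to numpy
```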
I don't want to deal with this bullshit language, bullshit ecosystem and bullshit unethical tech anymore.
-
So I decided to run Mozilla DeepSpeech on a dataset in my local language, using transfer learning from the existing English model.
I adjusted the alphabet and began the training.
I have a PC with a GTX 1080 lying around, so I used that, but I recommend using at least the newest RTX 3080 to not waste time (you can read below about how much time it took).
I waited for 3 days and the error got to about ~30, so I switched the dataset, and the error went to about ~1 after a week.
Yeah, I waited a whole goddamn week, because I don't use this computer daily.
So I picked some audio from YouTube to translate speech to text, and it works a little. It's not a masterpiece; I didn't test it extensively or fine-tune it, but it works as I expected. It recognizes some words perfectly, others partially, and others not at all.
I stopped testing at this point, as I don't have any business use or plans for this, but I'm probably one of the couple of companies/people right now who have a speech-to-text machine learning model for my native language.
It was my first time doing transfer learning, first time training on audio, and first time waiting this long for results. I can say I'm now convinced that ML is something big.
To sum up: with the right amount of money and time (about 1-3 months), you can probably make decent speech-to-text software at home that works well with your accent and native language.
-
This is gonna be a long post, and inevitably DR will mutilate my line breaks, so bear with me.
Also, I cut out a bunch because the length was over the limit, so I'll post the second half later.
I'm annoyed because it appears the current stablediffusion trend has thrown the baby out with the bath water. I'll explain that in a moment.
As you all know, I like to make extraordinary claims with little proof, sometimes for shits and giggles, and sometimes because I'm just delusional, apparently.
One of my legit 'claims to fame' is that, on the theoretical level, I predicted most of the developments in AI over the last 10+ years, down to key insights. I've never had the math background for it, but I understood the ideas I was working with at a conceptual level. Part of this flowed from powering through literal (god I hate that word) hundreds of research papers a year, because I'm an obsessive like that. And I had to power through them, because a lot of the technical low-level details were beyond my reach, but architecturally I started to see a lot of patterns, and began to grasp the general thrust of where research and development *needed* to go.
In any case, I'm looking at stablediffusion, and what occurs to me is that we've almost entirely thrown out GANs. As some or most of you may know, a GAN is where networks compete: one generates outputs that look real, another discerns which are real, and through that competition each improves the other's ability to generate a convincing fake and to discern one. Imagine a self-sharpening knife and you get the idea.
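For the uninitiated, here's a minimal sketch of one GAN update in PyTorch, with toy networks and stand-in data, just to show the competition:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> realness logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) + 3.0         # stand-in "real" data

# discriminator step: sharpen its ability to tell real from fake
fake = G(torch.randn(64, 16)).detach()  # detach so this step doesn't update G
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# generator step: sharpen its ability to fool the discriminator
fake = G(torch.randn(64, 16))
g_loss = bce(D(fake), torch.ones(64, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```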
Well, when we went to the diffusion method, upscaling noise (essentially a form of controlled pareidolia using autoencoders over seq2seq models), we threw out GANs.
We also threw out online learning. The models only grow on the backend. This doesn't help anyone but those corporations that have massive funding to create and train models. They get to decide how the models 'think', what their biases are, and what topics or subjects they cover. This is no good in the long run, but that's more of an ideological argument. That's not the real problem.
The problem is they've once again gimped the research, chosen a suboptimal trap for the direction of development.
What interested me early on in the lottery ticket theory was the implications. The lottery ticket theory says that part of the reason *some* RANDOM initializations of a network train/predict better than others comes down to a small pool of subgraphs that happened, by pure luck, to start with initializations that just so happened to be the right 'lottery numbers', as it were, for training quickly.
The first implication of this is that the bigger a network, the greater the chance of these lucky subgraphs occurring. Whether their density grows faster than the density of the 'unlucky' or average subgraphs is another matter.
From this, though, they realized what they could do was search out these subgraphs and prune many of the worst- or average-performing neighbor graphs, without meaningful loss in model performance. Essentially they could *shrink down* things like ChatGPT and BERT.
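The pruning half of that is now a library call; a sketch of magnitude pruning in PyTorch (the layer and the 90% amount are arbitrary here):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(512, 512)
prune.l1_unstructured(layer, name="weight", amount=0.9)  # zero out the 90% smallest-magnitude weights
prune.remove(layer, "weight")                            # bake the pruning mask in permanently
```

Finding which subgraphs deserve to survive is the actual lottery-ticket procedure (train, prune, rewind, repeat); the call above is just the scissors.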
The second implication was more subtle, and still is overlooked.
The existence of lucky subnetworks might suggest nothing additional, in which case the implication is that *any* subnet could *technically*, by transfer learning, be 'lucky' and train fast or be particularly good at some unknown task.
INSTEAD, however, what has happened is we haven't really seen that. What this means is actually pretty startling. It has two possible implications, either of which will have significant outcomes on the research sooner or later:
1. There is an 'island' of network size, beyond what we've currently achieved, where networks that are currently state of the art at some things rapidly converge to state-of-the-art *generalists* in nearly *all* tasks, regardless of input. What this would look like at first is a gradual drop-off in gains of the current approach, characterized as a potential new "AI winter", or a "limit to the current approach", which wouldn't actually be the limit, but a saddle point in its utility across domains and its intelligence (for some measure and definition of 'intelligence').
-
After examining my model and code closer, rudimentary as they are, I'm now pretty certain I didn't fuck up the encoder, which means the system is in fact learning the training sequence pairs, which means it SHOULD be able to actually factor semiprimes.
And looking at this, it means I fucked up the decoder and I have a good idea how. So the next step is to fix that.
I'm also fairly certain that, using a random forest approach -- namely, multiple models trained on other variables derived from the semiprimes -- I can extend the method and lower loss to the point where I can factor 400-digit numbers instead of 20-digit numbers.
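A sketch of what 'multiple models over derived variables' might look like with scikit-learn: one forest per factor digit, with placeholder features standing in for whatever the real identities derive.

```python
from sklearn.ensemble import RandomForestClassifier
from sympy import randprime

def features(n: int):
    """Placeholder derived variables; the real ones come from the identities."""
    s = str(n)
    return [int(d) for d in s[:8].ljust(8, "0")] + [n % 9, n % 11, len(s)]

# (semiprime, smaller factor) pairs, as in the earlier rant
pairs = []
for _ in range(2000):
    p, q = randprime(10**3, 10**4), randprime(10**3, 10**4)
    pairs.append((p * q, min(p, q)))

X = [features(n) for n, _ in pairs]
forests = []
for i in range(4):  # factors are 4 digits here; one forest per digit position
    y = [str(f)[i] for _, f in pairs]
    forests.append(RandomForestClassifier(n_estimators=50).fit(X, y))

n, f = pairs[0]
print("".join(t.predict([features(n)])[0] for t in forests), "vs", f)
```
-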
I'm writing all the dev things I know in a docs site as a means to be hireable should I need to switch jobs.
I'm not gonna go too deep into how I'm doing it. One style I'm enjoying is making every article only one page long; if it takes longer, maybe it should be broken into another article.
Fuck long articles. Yes, that's a bit autistic.
But I will describe the challenges I'm finding (which are quite many) in further detail.
One of them is that words can be ambiguous. "Production" can mean the production environment, but it can also mean production in plain English.
And there are tons of cases like this.
Because of this, I felt a lot of confusion in my beginner days. So it is my objective to write this in a way that prevents as much confusion as possible.
Granted, I don't want to write "development for dummies". Software is complex. But because it's complex on its own, I don't want to add complexity to the learning process through obscure language usage.
"Fine", I say, "I'll disambiguate". But this means I find myself branching out very often into fundamental or commonly used software terms like "framework", "model", "scaffold", "algorithm", "viewport", "breakpoint", etc.
Another challenge is reaching a good level of completeness.
This means I have to explain that obscure CLI flag I never used in my life.
If I don't do this, then what makes my docs different from those superficial dev.to or Medium posts? Nothing.
But trying to explain EVERYTHING about a software can generate a lot of frustration: I never finish.
It also makes me wonder "do I even know shit?". I think some amount of insecurity is healthy and pushes me forward.
But at some point it's kind of making me feel like shit. Maybe I just need to keep learning.
-
What do you think of Elixir + Phoenix for building APIs? Is it a better choice than a more established language like Python, or something newer like Scala or Clojure?
At my company we're going through a watershed moment where we're starting to discuss and think about re-building our digital foundations, and nothing is off limits. I'm leading the discussion about our architecture, where everyone can have their say on what the future looks like for our applications. We're currently on a Drupal (CMS) + PHP7/Symfony (backend content repository) + Symfony Twig templates (frontend) stack.
Even though I have been developing in PHP for most of my career, I personally love Elixir and spend a lot of my time away from work learning it. But many of my reasons feel subjective -- pattern matching, the actor concurrency model, immutable data, not having to deal with classes/objects -- and I'm not entirely sure how that translates to business value. Advocating successfully for a tech stack change requires solid reasoning and good answers to challenges like: how do we find Elixir developers when existing devs leave? How easy is it to build a CI/CD pipeline for Elixir/Phoenix? Etc.
-
I need some advice to avoid stressing myself out. I'm in a situation where I feel stuck between a rock and a hard place at work, and it feels like there's no one to turn to. This is a long one, because context is needed.
I've been working on a fairly big CMS based website for a few years that's turned into multiple solutions that I'm more or less responsible for. During that time I've been optimizing the code base with proper design patterns, setting up continuous delivery, updating packaging etc. because I care that the next developer can quickly grasp what's going on, should they take over the project in the future. During that time I've been accused of over-engineering, which to an extent is true. It's something I've gotten a lot better at over the years, but I'm only human and error prone, so sometimes that's just how it is.
Anyways, after a few years of working on the project, I get a new colleague who's going to help me on my CMS projects. It doesn't take long for me to realize that their code style is a mess: inconsistent line breaks and naming conventions, really god-awful anti-pattern code. There's no attempt to mimic the code style I've been using throughout the project; it's just complete chaos. The code "works", although it's not something I'd call production code. But they're new and learning, so I just sort of deal with it and remain patient, pointing out where they could optimize their code, teaching them basic object oriented design patterns like... just using freaking objects once in a while.
Fast forward a few years until now. They've learned nothing. Every time I read their code it's the same mess it's always been.
Concrete example: a part of the project uses Vue to render some common components in the frontend. Looking through the code, there is currently *no* attempt to include any air between functions, or in any part of the code for that matter. Everything gets transpiled and minified, so there's absolutely NO REASON to "compress" the code like this. Furthermore, they have often directly manipulated the DOM from the JavaScript code rather than rendering the component based on the model state, rendering the use of Vue completely pointless.
And this is just the frontend part of the code. The backend is often orders of magnitude worse. They will - COMPLETELY RANDOMLY - sometimes leave in 5-10 lines of whitespace for no discernible reason. It frustrates me to no end. I keep asking them to verify their staged changes before every commit, but nothing changes. They also blatantly copy/paste bits of my code to other components without thinking about what they're doing. So I'll have this random bit of backend code that injects 3-5 dependencies there's simply no reason for and that aren't being used. When I ask why they put them there, I simply get an "I don't know, I just did it like you did it".
I simply cannot trust this person to write production code, and the more I let them take over things, the more technical debt we accumulate. I have talked to my boss about this, and things have improved, but nowhere near where I need it to be.
On the other side of this are my project manager and my boss. They, of course, both want me to implement solutions with low estimates, and as fast and simply as possible. Which would be fine if I wasn't the only person fighting against this technical debt on my team. Add in the fact that specs are oftentimes VERY implicit, so I'm stuck guessing what we actually need and having to constantly ask if this or that feature should exist.
And then, out of nowhere, I get assigned another project after a colleague quits, during a time when I'm already overbooked. The project is very complex, and I'm expected to give estimates on tasks that would take me several hours just to research.
I'm super stressed and have no one I can turn to for help, hence this post. I haven't put the people in this post in the best light, but they're honestly good people that I genuinely like. I just want to write good code, but it's like I have to fight for my right to do it.
-
Trying to teach my friend, who has already graduated college, enough web dev stuff to land an internship and build a career. I can tell he's nervous because he's always asking how close he is to landing an internship.
I remember being there, wanting concrete answers but only hearing to just keep learning. Now that the shoe is on the other foot, I understand. Listening to him explain what he knows so far makes me feel slightly nostalgic, but also slightly concerned about whether he'll be able to learn enough soon enough.
He's been using Codecademy to learn and leaning on me a little, but I really need to boost his learning if he's gonna end up anywhere any time soon. He's familiar with HTML and basic CSS (the box model is still iffy, for example) and he's trying to grasp JS. Definitely not there yet, but I have no idea when I can start telling him he's in good shape.
-
Critical Tips to Learn Programming Faster (sample):
Be comfortable with basics
The mistake many aspiring students make is to start in a rush and skip the basics of programming and its fundamentals. They tend to start from comparatively advanced topics.
This tends to work in many sectors and fields of technology, but in the world of programming, a deep knowledge of the basic principles of coding is a must. If you are taking a class with a tutor and feel they are going too fast for your understanding, be firm and clear and ask them to go slower, so that you stay on the same page as everyone else.
More often than not, people who rush tend to struggle when they reach a higher level, feel lost, and then have to fall back and go through the basics again, which is time-consuming. Learning the basics well is the key to being fast and accurate in programming.
Practice coding by hand
This may sound strange to some of you. Why write code by hand when the actual work is supposed to be done on a computer? There are some reasons for this.
One reason is that when you are called to interview for a programming job, the technical evaluation will often include a hand-coding round to assess your programming skills. It makes sense, too: researchers have found that coding by hand is one of the best ways to learn how to program.
Be brave and fiddle with code
Most of us try to stick to the instructions given to us by our seniors, but it is extremely important to think outside the box and fiddle around with code. That way, you will learn how the results change with changes in the code.
Don't be over-ambitious and change the whole codebase; it takes experience to reach that level. Fiddling like this will give you enormous confidence in your skillset.
Reach out for guidance
Seeking help from professionals is never looked down upon. Your fellow devs will likely not hesitate to share their knowledge with you. They have also been in your position at some point in their careers, so help will be forthcoming.
You may need professional help in understanding a program, the bugs in it, and how to debug it. Sometimes other people can instantly identify a bug that escaped your attention. Don't be shy, and don't think that they'll make fun of you. It's always a team effort. Be comfortable around your colleagues.
Don't burn out
You must have seen people burning the midnight oil and still not reaching a conclusion, only to be reported by the testing team or the client.
These are common occurrences in the IT industry. It is really important to conserve energy and take regular breaks while learning or working. It improves concentration and may help you see solutions faster; taking breaks while working has been shown to improve results and productivity. To be a better programmer, you need to be well rested and have an active mind.
Go Online
It's a common misconception that learning how to program takes a lot of money, which is not true. There are plenty of online college courses designed for beginner students and programmers. Many free courses are also available online to help you become a better programmer. Websites like Udemy and Programming Hub are beneficial if you want to improve your skills.
There are free courses available for everything from [HTML](https://bitdegree.org/learn/...) to CSS. You can use these free courses to build a good basic knowledge. After cementing your skills, you can go for more complex paid courses.
Read Relevant Material
One should never stop acquiring knowledge. This could be an extension of the last point, but it is in a different context. The idea is to boost your knowledge about the domain you're working on.
In real-life situations, the client you're writing a program for possesses complete knowledge of their business and how it works, but doesn't know how to write code for a specific program -- and vice versa.
So, it is crucial to keep yourself updated about the recent trends and advancements. It is beneficial to know about the business for which you're working. Read relevant material online, read books and articles to keep yourself up-to-date.
Never stop practicing
The saying "practice makes perfect" holds no matter what profession you are in; it's a path to success. In programming, it is even more critical to practice, since your exposure to programming starts with the books and courses you take. The real work is done hands-on, so you must spend time writing code yourself and practicing on your system to get familiar with the interface and workflow.
Search for mock projects online or make your own model projects to practice coding, and commit to them attentively. Things will start to fall into place after some time.