3 rants for the price of 1, isn't that a great deal!
1. HP, you braindead fucking morons!!!
So recently I disassembled this HP laptop of mine to unfuck it at the hardware level. Some issues with the hinge that I had to solve. So I had to disassemble not only the bottom of the laptop but also the display panel itself. Turns out that HP - being the certified enganeers they are - made the following fuckups, with probably many more that I didn't even notice yet.
- They used fucking glue to ensure that the bottom of the display frame stays connected to the panel. Cheap solution to what should've been "MAKE A FUCKING DECENT FRAME?!" but a royal pain in the ass to disassemble. Luckily I was careful and didn't damage the panel, but the chance of that happening was most certainly nonzero.
- They connected the ribbon cables for the keyboard in such a way that you have to reach all the way into the gap between the keyboard and the motherboard to connect the bloody things. And some extra slack on the ribbon cables to enable servicing, with some room for actually connecting the bloody things easily? As Carlos Mantos would say it - M-m-M, nonoNO!!!
- Oh and let's not forget an old flaw that I noticed ages ago in this turd. The CPU goes straight to 70°C during boot-up but turning on the fan.. again, M-m-M, nonoNO!!! Let's just get the bloody thing to overheat, freeze completely and force the user to power cycle the machine, right? That's gonna be a great way to make them satisfied, RIGHT?! NO MOTHERFUCKERS, AND I WILL DISCONNECT THE DATA LINES OF THIS FUCKING THING TO MAKE IT SPIN ALL THE TIME, AS IT SHOULD!!! Certified fucking braindead abominations of engineers!!!
Oh and not only that, this laptop gets beaten by a Raspberry Pi 3B in performance, thermals, price and product quality.. A FUCKING SINGLE BOARD COMPUTER!!! Isn't that a great joke. Someone here mentioned earlier that HP and Acer seem to have been competing for a long time to make the shittiest products possible, and boy they fucking do. If there's anything that makes both of those shitcompanies remarkable, that'd be it.
2. If I want to conduct a pentest, I don't want to have to relearn the bloody tool!
Recently I ran the devRant web app's login flow through Burp Suite to see how it logs in, but since my Burp Suite is the community edition, I couldn't save the project. Fucking amazing, thanks PortSwigger! And I couldn't recreate the results anymore due to what I think is a change in the web app. But I'll get back to that later.
So I fired up bettercap (which works at lower network layers and can conduct ARP poisoning and DNS cache poisoning) with the intent to ARP poison my phone and get the results straight from the devRant Android app. I hadn't used this tool since around 2017, due to the fact that I kinda lost interest in offensive security. When I fired it up again a few days ago in my PTbox (which is a VM somewhere else on the network) and today again on my newly recovered HP laptop, I noticed that both hosts now have an updated version of bettercap, in which the options completely changed. It's now got different command-line switches and some interactive mode. Needless to say, I have no idea how to use this bloody thing anymore and don't feel like learning it all over again for a single test. Maybe this is why users often dislike changes to the UI, and why some sysadmins refrain from updating their servers? When you have users of any kind, you should at all times honor their installations, give them time to change their individual configurations - tell them that they should! - in other words give them a grace period, and allow for backwards compatibility for as long as feasible.
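For future reference (mostly my own), the interactive session I eventually pieced together looks roughly like this. The interface name and target IP are made up, and the commands are from the bettercap 2.x docs as I understand them, so treat it as a sketch rather than gospel:

```
sudo bettercap -iface wlan0          # launch the interactive session on a given interface
net.probe on                         # discover hosts on the LAN
set arp.spoof.targets 192.168.1.23   # the phone to poison
arp.spoof on                         # start ARP poisoning
net.sniff on                         # sniff the traffic now flowing through this host
```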
3. devRant web app!!
As mentioned earlier I tried to capture the web app's login flow with Burp Suite, but every time I try to log in with its proxy enabled, it doesn't open the login form and instead just makes a GET request to /feed/top/month?login=1 without ever allowing me to actually log in. This happens in both Chromium and Firefox, on Windows and Arch Linux. Clearly this is a change to the web app, and a very undesirable one. Especially considering that the login flow for the API isn't documented anywhere, as far as I know.
So, can this update to the web app be rolled back or merged back to an older version of that login flow? Or can I at least know how I'm supposed to log in to this API, in order to be able to start developing my own client?
Okay, story time.
Back during 2016, I decided to do a little experiment to test the viability of multithreading in a JavaScript server stack, and I'm not talking about the Node.js way of queuing I/O on background threads, or about WebWorkers that box and convert your arguments to JSON and back during a simple call across two JS contexts.
I'm talking about JavaScript code running concurrently on all cores. I'm talking about replacing the god-awful single-threaded event loop of ECMAScript – the biggest bottleneck in software history – with an honest-to-god, lock-free thread-pool scheduler that executes JS code in parallel, on all cores.
I'm talking about concurrent access to shared mutable state – a big, rightfully-hated mess when done badly – in JavaScript.
This rant is about the many mistakes I made at the time, specifically the biggest – but not the first – of which: publishing some preliminary results very early on.
Every time I showed my work to a JavaScript developer, I'd get negative feedback. Like, unjustified hatred and immediate denial, or outright rejection of the entire concept. Some were even adamantly trying to discourage me from this project.
So I posted a sarcastic question to the Software Engineering Stack Exchange, which was originally worded differently to reflect my frustration, but was later edited by mods to be more serious.
You can see the responses for yourself here: https://goo.gl/poHKpK
Most of the serious answers were along the lines of "multithreading is hard". The top voted response started with this statement: "1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is."
While I'll admit that my presentation was initially lacking, I later made an entire page to explain the synchronisation mechanism in place, and you can read more about it here, if you're interested:
http://nexusjs.com/architecture/
But what really shocked me was that I had never understood the mindset that all the naysayers adopted until I read that response.
Because the bottom-line of that entire response is an argument: an argument against change.
The average JavaScript developer doesn't want a multithreaded server platform for JavaScript because it means a change of the status quo.
And this is exactly why I started this project. I wanted a highly performant JavaScript platform for servers that's more suitable for real-time applications like transcoding, video streaming, and machine learning.
Nexus does not and will not hold your hand. It will not repeat Node's mistakes and give you nice ways to shoot yourself in the foot later, like `process.on('uncaughtException', ...)` for a catch-all global error handling solution.
No, an uncaught exception will be dealt with the way any other self-respecting language deals with it: by not ignoring the problem and pretending it doesn't exist. If you write bad code, your program will crash, and you can't rectify a bug in your code by ignoring its presence entirely and duct-taping something together.
Back on the topic of multithreading, though. Multithreading is known to be hard, that's true. But how do you deal with a difficult problem? You simplify it and break it down; you don't just disregard it completely, because multithreading has its great advantages, too.
Like, how about we talk performance?
How about distributed algorithms that don't waste 40% of their computing power on agent communication and pointless overhead (like the serialisation/deserialisation of messages across the execution boundary for every single call)?
How about vertical scaling without forking the entire address space (and thus multiplying your application's memory consumption by the number of cores you wish to use)?
How about utilising logical CPUs to the fullest extent, and allowing them to execute JavaScript? Something that isn't even possible with the current model implemented by Node?
Some will say that the performance gains aren't worth the risk. That the possibility of race conditions and deadlocks isn't worth it.
That's the point of cooperative multithreading. It is a way to smartly work around these issues.
If you use promises, they will execute in parallel, to the best of the scheduler's abilities, and if you chain them then they will run consecutively as planned according to their dependency graph.
If your code doesn't access global variables or shared closure variables, or your promises only deal with their provided inputs without side-effects, then no contention will *ever* occur.
If you only read and never modify globals, no contention will ever occur.
Are you seeing the same trend I'm seeing?
Good JavaScript programming practices miraculously coincide with the best practices of thread-safety.
When someone says we shouldn't use multithreading because it's hard, do you know what I like to say to that?
"To multithread, you need a pair."
Ok friends let's try to compile Flownet2 with Torch. It's made by NVIDIA themselves so there won't be any problem at all with dependencies right?????? /s
Let's use Deep Learning AMI with a K80 on AWS, totally updated and ready to go super great always works with everything else.
> CUDA error
> CuDNN version mismatch
> CUDA versions overwrite
> Library paths not updated ever
> Torch 0.4.1 doesn't work so have to go back to Torch 0.4
> Flownet doesn't compile, get bunch of CUDA errors piece of shit code
> online forums have lots of questions and 0 answers
> Decide to skip straight to vid2vid
> More cuda errors
> Can't compile the fucking 2d kernel
> Through some act of God reinstalling cuda and CuDNN, manage to finally compile Flownet2
> Try running
> "Kernel image" error
> excusemewhatthefuck.jpg
> Try without a label map because fuck it the instructions and flags they gave are basically guaranteed not to work, it's fucking Nvidia amirite
> Enormous fucking CUDA error and Torch error, makes no sense, online no one agrees and 0 answers again
> Try again but this time on a clean machine
> Still no go
> Last resort, use the docker image they themselves provided of flownet
> Same fucking error
> While in the process of debugging, realize my training image set is also bound to give bad results, because "directly concatenating" images together as the paper claims actually has horrible results, and the network doesn't accept 6 channel input no matter what, so the only way to get around this is to make 2 images (3 * 2 = 6 quick maths; sketch at the end of this rant)
> Fix my training data, fuck Nvidia dude who gave me wrong info
> Try again
> Same fucking errors
> Doesn't give any helpful information, just spits out a bunch of fucking memory addresses and long function names from the CUDA core
> Try reinstalling and then making a basic torch network, works perfectly fine
> FINALLY.png
> Setup vid2vid and flownet again
> SAME FUCKING ERROR
> Try to build the entire network in tensorflow
> CUDA error
> CuDNN version mismatch
> Doesn't work with TF
> HAVE TO FUCKING DOWNGRADE DRIVERS TOO
> TF doesn't support latest cuda because no one in the ML community can be bothered to support anything other than their own machine
> After setting up everything again, realize have no space left on 75gb machine
> Try torch again, hoping that the entire change will fix things
At this point I'll leave a space so you can try to guess what happened next before seeing the result.
Ready?
3
2
1
> SAME FUCKING ERROR
In conclusion, NVIDIA is a fucking piece of shit that can't make their own libraries compatible with themselves, and can't be fucked to write instructions that actually work.
If anyone has vid2vid working or has gotten around the kernel image error for AWS K80s please throw me a lifeline, in exchange you can have my soul or what little is left of it
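P.S. for anyone puzzled by the 6-channel bit above, here's a sketch of what I mean. The shapes are made up, and this is emphatically not vid2vid's actual data loader:

```python
import torch

frame_a = torch.rand(3, 256, 512)  # RGB frame t   (channels, height, width)
frame_b = torch.rand(3, 256, 512)  # RGB frame t+1

# What the paper's "directly concatenating" amounts to: one 6-channel tensor...
stacked = torch.cat([frame_a, frame_b], dim=0)
print(stacked.shape)  # torch.Size([6, 256, 512]), which the network refused

# ...hence the workaround: keep two separate 3-channel images
# (3 * 2 = 6, quick maths) and feed them as a pair instead.
pair = (frame_a, frame_b)
```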
Running a fucking conda environment on Windows (an updated environment from the previous one that I normally use) gets to be a fucking pain in the fucking ass for no fucking reason.
First: Generate a new conda environment, for FUCKING SHITS AND GIGGLES, DO NOT SPECIFY THE PYTHON VERSION, just to see compatibility, this was an experiment, expected to fail.
Install tensorflow on said environment: It does not fucking work, not detecting cuda, the only requirement? To have the cuda dependencies installed, modified, and inside of the system path, check done, it works on 4 other fucking environments, so why not this one.
Still doesn't work, google around and found some thread on github (the errors) that has a way to fix it, do it that way, fucking magic, shit is fixed.
Very well, tensorflow is installed and detecting cuda, no biggie. HAD TO SWITCH TO PYTHON 3.8 BECAUSE 3.9 WAS GIVING ISSUES FOR SOME UNKNOWN FUCKING REASON
Ok no problem, done.
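(For the record, the combination that eventually behaved for me looked roughly like the following. The exact versions are from memory and depend on your driver, so treat them as assumptions:)

```
conda create -n tf python=3.8
conda activate tf
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1
pip install "tensorflow<2.11"
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```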
Install jupyter lab, which worked first try in all 4 other environments. Guess what: a fuckload of errors upon importing tensorflow. They go on a loop that does not fucking end.
The error: imPoRT eRrOr thE Dll waS noT loAdeD
Ok, fucking which one? who fucking knows.
I FUCKING HATE that the main language for this fucking bullshit is python. I guess the benefits of the repl, I do, but the python repl is fucking HORSESHIT compared to the ones you get in Lisp, Ruby and fucking even NODE, in which error messages are still more fucking intelligent than those of fucking bullshit-ass Python.
Personally? I am betting on Julia devising a smarter environment, it is a better language already, on a second note: If you are worried about A.I taking your job, don't, it requires a team of fucktards working around common basic system administration tasks to get this bullshit running in the first place.
My dream? Julia or Scala (fuck you) as a primary language for machine learning and AI, with entire environments, with aaaaaaaaaall of the required dlls and dependencies, that can just be downloaded, installed and run. A single directory structure in which shit just fucking works (reason why I like live environments like Smalltalk, but fuck you on that too) and you just run your projects from there, without setting a bunch of bullshit environment variables, cuda dll installation phases and what not. Something that JUST FUCKING WORKS.
I.....fucking.....HATE the level of system administration required to run fucking anything nowadays, the reason why we had to create shit like devops jobs, for the sad fuckers that have to figure out environment configurations on a box just to run software.
Fuck me man, development turned to shit; this is why the strict folder structure pipelines of go mod, node npm and php composer were created. Bitch all you want about npm, but if I could create a node_modules setup with all of the required dlls to run a project, even if this bitch weighs 2.5GB for a project structure, you bet your fucking ass that I would.
"YOU JUST DON'T KNOW WHAT YOU ARE DOING" YES I FUCKING DO and I will get this bullshit fixed, I will get it running just like I did the other 4 environments that I fucking use, for different versions of cuda and python and the dependency circle jerk BULLSHIT that I have to manage. But this "follow the guide and it will work, except when it does not and you are looking into obscure github errors" bullshit just takes away from valuable project time when you have a small dedicated group of developers and no sys admin or devops mastermind to resort to.
I have successfully deployed:
Java
Golang
Clojure
Python
Node
PHP
VB/C# .NET
C++
Rails
Django
Projects, and every single fucking time (save for .net, that shit just fucking works on a dedicated windows IIS server) the shit will not work with x..nT reasons. It fucking obliterates me how fucking annoying this bullshit is. And the reason why the ENTIRE FUCKING FIELD of computer science and software engineering is so fucking flawed.
But we can't all just run to simple windows bs in which we have documentation for everything. We have to spend countless hours on fucking Linux figuring shit out (fuck you also, I have been using Linux since I was 18, I am 30 now), where graphical drivers for machine learning, cuda and whatTheFuckNot require all sorts of sysadmin gymnastics to be used.
Y'all fucked up a long time ago. Smalltalk provided an all-in-one, easily-rolled-back-to-previous-images, easily administered interface for this fileFuckery bullshit, and even though the JVM and the .NET environments did their best to hold shit down, and even though we had npm packages pulling the universe inside, or gomod compiling shit into one place, NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO, we had to do whatever the fuck we wanted just to feel l337.
Fuck all of you, fuck this field, fuck setting up boxes for ML/AI and fuck every single OS in existence
I was just writing a long rant about how my rant style changed, and how I could fix anything that annoys me in a heartbeat by just putting my mind to implementing a change. Then YouTube once again paused the synth mix that was playing on my laptop in the background, with that stupid "Video paused. Continue watching?" pop-up. I even installed an add-on for it in Firefox to make it automatically click that away. I guess that YouTube did yet another bullshit update to break that, for "totally legitimate user interface improvements" or whatever. Youtube-dl faces similar challenges all the time, and it's definitely not alone in that either. I also had issues with that on Facebook when I wanted to develop on top of that, where the UI changes every other day and the API even changes every other week. And as far as backwards compatibility goes, our way or the highway!
So I did the whole "replace and move on" type of thing. I use youtube-dl often now to get my content off YouTube into a media player that doesn't fuck me over for stupid reasons like "ad fraud" (I use an ad blocker, you twats; what ads am I gonna fraud against?) or "battery savings" (the damn laptop is plugged in and fully topped up, for fucks sake, and you do this crap even on desktop computers). Gee, I wonder why creators are moving on to Floatplane and Nebula nowadays, and why people like yours truly use "highly illegal" youtube-dl. Oh, and thank you for putting me in Saudi Arabia again. The pinnacle of data mining, machine learning and other such wank could not do GeoIP for a server that used to be in a datacenter in Italy for years, and recently has been moved to another hosting provider in Germany. It's about as unchanging, static and easy to geolocate as you can possibly get. But hey, kill off another Google+ when?
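(The youtube-dl incantation itself is nothing exotic, by the way. The URL is a placeholder and the output template is just my preference:)

```
youtube-dl -f bestaudio -x --audio-format mp3 -o "%(title)s.%(ext)s" "https://www.youtube.com/watch?v=..."
```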
Like seriously, yes I'm taking your Foobar challenges, and you may very well be the company I end up working for. But if anything it feels like there's a shitton of stuff to fix. And the challenges themselves still using Python 2.7 honestly feels like the seldom-seen tip of the iceberg.
VIM! ViM! vim! Vi Improved! Emacs (wait, ignore that one). What's this mysterious VIM? Some believe mastering this beast will provide them with untold mastery over the forces of command line editing. Others would just like to know how you exit the bloody thing. But in essence, VIM is a command line text editor at heart, and its learning curve is so high it's a circle.
There are a lot of posts on the inter-webs detailing how to use that cruel mistress that is VIM. But rather than focus on how to be super productive in VIM (because honestly I've still not got a clue), this post focuses on my personal journey: my numerous attempts to use VIM in my day to day work, up to eventually being able to call myself a novice.
My VIM journey started in 2010, around the same time I was transitioning some of my hobby projects from SVN to GIT. It was around that time that I attempted to run "git commit" in order to commit some files into one of my repositories.
Notice I didn't specify the "-m" flag to provide a message. So what happened next? A wild command line editor opened in order for me to specify my message. Foolish me assumed this command editor was just like similar editors such as Nano. After much CTRL + C'ing, CTRL + Z'ing, CTRL + X'ing and a good measure of Google, I was finally able to exit the thing. Yeah…exit it. At this moment the measure of the complexity of this thing should be kicking in already, but it's unfair to judge it based on today's standards of user friendly-ness. It was born in a much simpler time. Before even the mouse graced the realms of the personal computing world.
But anyhow I'll cut to the chase. For all of you who skipped most of the post to get to this point, it's ":q!". That's the keyboard command to quit…well, kinda: this will quit the program. But…you know what, just go here: The Manual. In fact, that's probably not going to help either, I recommend reading on :p
My curiosity was piqued. So I went off in search of a way to understand this VIM thing. It seemed to be pretty awesome; looking at some videos on YouTube, I could do pretty much what Sublime Text could, but from the terminal. Imagine ssh'ing into a server and being able to make code edits, with full autocomplete et al. That was the dream; the practice…was something different. So I decided to make the commitment and use VIM for editing one of my existing projects.
So I fired the program up and watched the world burn behind me. Ahhh…why can't I type anything? No matter what I typed, nothing seemed to appear on screen. Surely I must be missing something, right? Right! After firing up the old Google machine again, it would appear there is this concept known as modes. When Vim starts up it defaults to a mode called "Normal" mode, and hitting keys in this mode executes commands. But "Insert" mode, entered by hitting the "i" key, allows one to insert text.
Finally, I thought, I think I understand how this VIM thing works: I can just use "Insert" mode to insert text and the arrow keys to move around. Then when I want to execute a command, I just press "Esc" and type the command, such as the one for saving the file (a handful of the essentials are listed below). So there I was, happily editing my code using "Insert" mode and the arrow keys, but little did I know that my happiness would be short lived; the arrow keys were soon to be a thorn in my VIM journey.
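For anyone following along at home, the handful of commands that got me through that first session (typed from normal mode):

```
i        " enter insert mode, so typed characters become text
Esc      " leave insert mode, back to normal mode
:w       " write (save) the file
:q       " quit, refuses if there are unsaved changes
:q!      " quit and throw away unsaved changes
:wq      " write, then quit
h j k l  " move left / down / up / right without arrow keys
```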
Join me for part two of this rant, in which we learn the untold truth about arrow keys, touch typing, and a vimrc created from scratch. Until next time…
:q!
This year I'm remaking my chatbot, a new improved chatbot.
I mean it's not new. But it's something I've always wanted to do personally since the advent of my original Chatbot (poorly written in batch).
If you know me at all, then you know that the actual reason for me getting into programming was because I made my own chatbot using windows batch (I was 14 and didn't know a lick about programming or available programming languages...)
Anyways, I'm wanting to dive back to my roots and build something from the ground up, but this time build it with all the knowledge I've accumulated since I designed my first program.
# Retrospective as Backend engineer
Once upon a time, I was rejected by a startup that tried to snag me from another company that I was working with.
They were looking for a Senior / Supervisor level backend engineer, and my profile looked like a fit for them.
So they contacted me and arranged a technical test, a system design test, and an interview with their lead backend engineer, who also happens to be a co-founder of the startup.
## The Interview
As usual, they asked me about my contributions to my previous workplaces.
I answered with the achievements I considered the best at each company I worked with, and how they were achieved technologically.
One of it includes designing and implementing a `CQRS+ES` system in the backend.
With the complete capability of what I `brag` about as a `Time Machine`: rebuilding any past state through replaying events.
## The Rejection
And of course I was rejected by the startup, maybe specifically by the co-founder, as I found out when I asked an insider about the reason for the rejection.
They insisted I am the kind of guy who overengineers things that are not needed by doing `CQRS+ES`, which to them is only suitable for R&D, non-production stuff.
Nobody needs that kind of `Time Machine`.
## Ironically
After switching jobs (to another company), becoming a fullstack developer, and learning about React and Redux,
I can reflect back on this past experience and say this:
The same company that said `CQRS+ES` was overengineering also uses `React+Redux`.
Never did they realize the concept behind `React+Redux` is very similar to `CQRS+ES`.
- Separation of concern
- CQRS: `Command` is separated from `Query`
- Redux: Side effect / `Action` in `Thunk` separated from the presentation
- Managing State of Application
- ES: Through sequence of `Event` produced by `Command`
- Redux: Through action data produced / dispatched by `Action`
- Replayability
- ES: Through replaying `Event` into the `Applier`
- Redux: Through replay `Action` which trigger dispatch to `Reducer`
---
The same company that said `CQRS` was overengineering also uses `ElasticSearch+MySQL`.
Never did they realize that separating the `WRITE` database into `MySQL` as their `Single Source Of Truth`, and the `READ` database into `ElasticSearch`, is also in line with the `CQRS` principle.
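To make the `Time Machine` concrete, here is a minimal event-sourcing sketch in Python. The bank-account domain and all names are invented for illustration; this is not the actual system I built:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Event:
    kind: str    # e.g. "Deposited" or "Withdrawn", produced by a Command
    amount: int

class Account:
    def __init__(self) -> None:
        self.balance = 0

    def apply(self, event: Event) -> None:
        # The applier: the only place where state is mutated.
        if event.kind == "Deposited":
            self.balance += event.amount
        elif event.kind == "Withdrawn":
            self.balance -= event.amount

def replay(events: List[Event], upto: Optional[int] = None) -> Account:
    # The "Time Machine": replaying the log up to any point in the
    # sequence reconstructs the state exactly as it was at that moment.
    account = Account()
    for event in events[:upto]:
        account.apply(event)
    return account

log = [Event("Deposited", 100), Event("Withdrawn", 30), Event("Deposited", 50)]
print(replay(log).balance)     # 120, the current state
print(replay(log, 2).balance)  # 70, the state after only the first two events
```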
## Value as Backend Engineer
It's sad days as a Backend Engineer, at least in the country I live in.
It seems like being a backend engineer is often under-appreciated.
Companies (or people) seem to think a backend engineer is the guy who ONLY makes `CRUD` API endpoints to a database.
- I've heard a Fullstack engineer who comes from a React background complain that Backend engineers have it easy by only doing CRUD, without having to worry about the application.
- The same guy fails when given a Backend task to make a simple round-robin ticketing system (see the sketch at the end of this rant).
- I've seen a company that only hires Fullstack engineers with strong Frontend experience fail to show a basic understanding of how SQL transactions and connection pools work.
- I've seen a company's Fullstack engineers rely on the ORM for super complex queries instead of writing proper SQL, preferring to translate SQL into the ORM's query language.
- I've seen a company's Fullstack engineer with a strong React background brag about Uncle Bob's clean code but fail to know how to do basic dependency injection.
- I've heard a company that made a webapp criticize my way of handling `session` through a secure http cookie, saying it's a bad practice and that local storage is better, despite my argument about the `secure` flag on the cookie and the ability to control the cookie via the backend.
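And since the round-robin ticketing system came up: the core of it really is a handful of lines. A sketch of the idea (not the actual interview task), assuming tickets just need to rotate evenly across agents:

```python
from itertools import cycle

agents = ["alice", "bob", "carol"]
rotation = cycle(agents)  # endless iterator: alice, bob, carol, alice, ...

def assign(ticket: str) -> str:
    # Each new ticket goes to the next agent in the rotation.
    agent = next(rotation)
    print(f"{ticket} -> {agent}")
    return agent

for ticket in ["T-1", "T-2", "T-3", "T-4"]:
    assign(ticket)  # alice, bob, carol, then back to alice
```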
Fellow Deviants, I need your help in understanding the importance of C++
Okay, I need to clarify a few things:
I am not a beginner or a newbie who has just entered this community...
I have been using C++ for some time and in fact, it was the language which introduced me to the world of programming... Before, I switched to Java, since I found it much better for application development...
I already know about the obvious arguments given in favour of C/C++ like how it is a much more faster and memory efficient than other languages...
But, at the same time, C/C++ exposes us and doesn't protect us from ourselves.. I hope that you understand what I mean to say..
And, I guess that it is a fair tradeoff for the kind of power and control that these languages (C/C++) provide us..
And, I also agree with the fact that it is an language that ideally suits our need, if we wish to deal with compilers, graphics, OS, etc, in the future...
But, what I really want to ask here is:
In this age and time, when hardware has advanced so much that memory efficiency or execution speed is technically no longer the topmost priority... These were the reasons for which C/C++ were initially created...
In today's time, the human cost in time matters more, and hence syntactically less complicated languages like Java or Python are much preferred, especially for domains like application development or data science...
So, is continuing with C++ an endeavour worth sticking with in the future, or is it not required...
I am talking about this issue since I am in a dilemma about the use of C++ in the future...
I would be grateful if we could talk about keeping AI, Machine Learning or Algorithms Optimisation in mind... Since, these are the fields in which I am interested in...
I know that my question could have been posted in a better way.. But, considering the chaos that is present in my mind, regarding this question doesn't allow me to do so...
Any kind of suggestion or thoughts would be welcome and much appreciated...
P.S: I currently use C++ only for competitive programming or challenges...
python machine learning tutorials:
- import preprocessed dataset in perfect format specially crafted to match the model instead of reading from file like an actual real life would work
- use images data for recurrent neural network and see no problem
- use Conv1D for 2d input data like images
- use two-letter variable names whose meaning only the tutorial creator knows.
- do 10 data transformations in 1 line with no explanation of what is going on
- just enter these magic words
- okey guys thanks for watching make sure to hit that subscribe button
ehh, the machine learning ecosystem is burning pile of shit let me give you some examples:
- thanks to years of object oriented programming research and the most wonderful abstractions we have "loss.backward()", which has no apparent connection to the model but affects it anyway (see the sketch at the end of this rant), good to know
- cannot install the python packages because python must be >= 3.9 and at the same time < 3.9
- runtime error with bullshit cryptic message
- python having no data types but pytorch forces you to specify float32
- lets throw away the module name of a function with these simple tricks:
"import torch.nn.functional as F"
"import torch_geometric.transforms as T"
- tensor.detach().cpu().numpy() ???
- class NeuralNetwork(torch.nn.Module):
      def __init__(self):
          super(NeuralNetwork, self).__init__() ????
- lets call the function that switches a model into training mode "model.train()" instead of something more indicative of the function's actual effect, like "model.set_mode_to_train()"
- what the fuck is ".iloc" ?
- "solving environment" -/- brings back memories of when you could make breakfast while the computer was turning on
- hey lets choose the slowest, sloppiest and most inconsistent language ever created for a high performance computing task called "data sCieNcE". but.. but. you can use numpy! I DONT GIVE A SHIT about numpy, why don't you motherfuckers create a language that is inherently performant instead of calling some convoluted c++ library that requires 10s of dependencies? Why don't you create a package management system that works without me having to try random bullshit for 3 hours???
- lets set as the industry standard a jupyter notebook, which is not git compatible and has either 2-second tab-completion latency or no tab completion, no documentation on hover or useless documentation on hover, no way to easily redo changes, no autosave, no error highlighting, and the possibility to use a variable defined in a cell below in the cell above it
- lets use inconsistent variable names like "read_csv" and "isfile"
- lets pass a boolean variable as a string "true"
- lets contribute to tech enabled authoritarianism and create a face recognition and object detection models that china uses to destroy uyghur minority
- lets create a license plate computer vision system that will help government surveillance everyone, guys what a great idea
I don't want to deal with this bullshit language, bullshit ecosystem and bullshit unethical tech anymore.
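For balance, here's what those idioms actually do when strung together: a minimal training-step sketch with toy data (no claims that this is good ML, only that the names finally mean something):

```python
import torch

class NeuralNetwork(torch.nn.Module):
    def __init__(self):
        super().__init__()  # the modern spelling of super(NeuralNetwork, self).__init__()
        self.linear = torch.nn.Linear(4, 1)

    def forward(self, x):
        return self.linear(x)

model = NeuralNetwork()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

model.train()          # switches layers like dropout/batchnorm into training mode;
                       # it does NOT start training, despite the name
x = torch.randn(8, 4)  # toy batch, float32 by default
y = torch.randn(8, 1)

optimizer.zero_grad()  # clear gradients left over from any previous step
loss = loss_fn(model(x), y)
loss.backward()        # the "no apparent connection": autograd walks the graph from
                       # the loss back to the parameters and fills in their .grad fields
optimizer.step()       # the optimizer holds references to the parameters and applies .grad

print(loss.detach().cpu().numpy())  # detach from the graph, move to CPU, convert to numpy
```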
How are Coding Bootcamps and what are they like?
A little background:
I’ve been going to a University (have a year left for a CS degree) and I am so EXTREMELY frustrated. I thought I would get an education but it’s so underwhelming. 95% of it doesn’t involve programming and the classes that do are so elementary that I know more than the professors. By the end of my web design course we had been taught to center text, insert images, insert links, and how to use tables with a single day on CSS using colors.
The OOP courses are all the same, learn variables, types, conditionals, loops, classes, functions, and so forth. Python, C++, and Java. I taught all this to myself when I was 15, I’m 29 now.
I’ve recently gotten extremely interested into full stack web development. .NET Core, React, Typescript. I’m also working with Electron. I’m basically 100% self taught and spend almost every waking moment trying to learn more and apply it.
There's only one person at my school who has the same passion as me, and he's the president of the coding club, but he's going into machine learning and big data (I'm the Secretary). I just wish I could interact with more people who have the same passion. I would love to be challenged. I feel as if I spend more time trying to learn and diagnose problems than applying my knowledge, because web development is so complicated when it comes to connecting everything together, and I'm still relatively new to it (started like 4 months ago). I'm an extremely fast learner and extremely dedicated, so I'm not worried about that being an issue.
I just really want to be a part of a community where I have people who can answer my questions and I don’t have to spend hours or days on google finding a solution to integrating Webpack or using typescript with react, and more. I want to feel challenged.
Can I get this from a boot camp? I recently listened to a podcast from Syntax and it really excited me but I don’t want to be let down again. Either way I’m finishing my degree to get that bullshit $60000 piece of paper but I wouldn’t mind taking a couple months off for something like this if it’s worth it.
I live in CO so if you have any Bootcamps in CO that you recommend, I’d love to hear it and take a trip to check it out in person.
Thanks a bunch!
So I decided to run Mozilla DeepSpeech on a dataset in my local language, using transfer learning from the existing English model.
I adjusted the alphabet and began the training (rough invocation at the end of this rant).
I have a PC with a GTX 1080 lying around, so I utilized that, but I recommend using at least the newest RTX 3080 to not waste time (you can read below about how much time it took).
I waited for 3 days and the error got down to about ~30, so I switched the dataset, and the error went to about ~1 after a week.
Yeah, I waited a whole goddamn week, cause I don't use this computer daily.
So I picked some audio from YouTube to translate speech to text, and it works a little. It's not a masterpiece; I didn't test it extensively or fine-tune it, but it works as I expected. It recognizes some words perfectly, recognizes others partially, and doesn't recognize the rest.
I stopped testing at this point, as I don't have any business use or plans for this, but I'm probably one of only a couple of companies / people right now who have a speech to text machine learning model for my native language.
I was doing transfer learning for the first time; it was also my first time training from audio and waiting such a long time for results. I can say I'm now convinced that ML is something big.
To sum up: probably, with the right amount of money and time (about 1-3 months), you can make decent speech to text software at home that will work well with your accent and native language.
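For the curious, the training invocation looked roughly like this. The paths are placeholders and the flags are from the DeepSpeech training docs as I remember them, so double-check them against your checkout:

```
python DeepSpeech.py \
  --train_files data/train.csv \
  --dev_files data/dev.csv \
  --test_files data/test.csv \
  --alphabet_config_path data/alphabet.txt \
  --load_checkpoint_dir checkpoints/english \
  --save_checkpoint_dir checkpoints/my-language \
  --drop_source_layers 1 \
  --epochs 30
```

The `--drop_source_layers 1` flag is the important one when the alphabet changes: it discards the English model's final layer so the output can be re-learned for your character set.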
#need_help
Dear all,
I'm trying to make a choice, a choice that won't make me regret it for the years ahead. I'm in a dilemma: I don't know which MacBook I should get for my everyday life. I currently work as an iOS developer (I learned iOS using all kinds of hackintoshes; yeah, I have never bought a single Apple computer, yet), I always have motivation to learn new stuff (from machine learning, to web development, to making games with Unity (or whatever engine); hell, I even like to design stuff from time to time using Photoshop and Sketch, and I sometimes do video editing using Premiere and After Effects), and I have yet to choose which laptop to get. I've got only one week to make the choice, so...
Here are the options:
The new MacBook Pro 2016 (Touch Bar edition):
Pros: 'Latest' and 'greatest', have thunderbolt ports which makes it (sort of) future proof, TouchId for unlocking the laptop using a fingerprint.
Cons: You need a damn dongle everywhere, no escape key (Which I use for the autocomplete feature in Xcode), and this touch bar (Which I really have no idea if i will ever use it other than the nyan cat app for 5 minutes), plus I heard about battery issues with it (don't know if they resolved it or not), fucking huge trackpad, and no fucking MagSafe!
The previous model MacBook Pro 2015:
Pros: Ports, lots of them, small trackpad (Which you don't have to worry about your palm screwing up your work), and MagSafe! (Which I honestly don't know if it'll make any difference for my usage)
Cons: has old CPU from Haswell generation (I know that it won't feel different, it's just that I like to have parts that are the 'latest')
Now some questions, for people who have the old MacBooks and new MacBooks:
For the ones with old MacBook:
If you were given the choice to replace the old MacBook for the new one for free, would you go for it?
After all this time, how's the battery performance? is it still great from the time you bought it?
Foe the ones with new MacBook:
Does the huge-ass trackpad interfere with your work day?
Do you miss magsafe to a point where you really want to throw out the new laptop and go back to previous model?
Did you get used to carry out dongles everywhere?
Did you like the TouchBar? Does it help you in your everyday work? from designing to coding to whatever, do you think that now you can't live without it?
How's the battery performance?
Is programming on it joyable? or the new keyboard and touchpad are just a meh?
Strawpoll to make it easier to vote:
http://www.strawpoll.me/12856510
In addition to that, I would love for you guys to detail your experience and answer some of the questions I posted above; I would be very, very grateful.
Today I had a full-day job interview for a junior data scientist position.
First I met the team, which was only about half of everyone, because apparently everyone is gone on Fridays. However, the few who were there were really nice.
The first task was to do some basic data analysis stuff, even though I had already spent a week on the coding challenge and sent them all my code/tasks. I log into my machine and create a new virtual environment, but can't for the life of me figure out how to use the command line in Windows to install packages. Turns out there is some problem with their proxy and they have to log me in on that. Then I am struggling on the keyboard because it's for a language different from my mother tongue, and it takes me 3x as long to do the most simple things. All my shortcuts are out the window. I'm having a hard time typing parentheses and brackets. I start freaking out and have a panic attack mid-task. I'm sweating bullets. I didn't even make it to the simple visualization tasks, much less the models at the end. Time gets called and we all go to lunch, and I'm freaking out on the inside the entire time. Angry at myself because I know I am better and just couldn't think.
After lunch I present my code and results from a coding challenge I did weeks prior. People from other teams get invited and I end up getting grilled for 2 hours by 15 people. Questions are flying in from all sides. They ask me almost everything I know about machine learning and some more. Under stress I forgot the name of the optimizer I used and couldn't answer some easy stuff because my mind was racing.
Right now I am on the train home and my body physically hurts. I am disappointed with myself and wish I could have shown up better. I never really froze up like this before.
The hype around Artificial Intelligence and Neural Nets makes me sicker by the day.
We all know that the potential power of AI gives stock prices a bump and bolsters investor confidence. But too many companies are reluctant to address its very real limits. It has evidently become a taboo to discuss AI's shortcomings and the limitations of machine learning, neural nets, and deep learning. However, if we want to strategically deploy these technologies in enterprises, we really need to talk about their weaknesses.
AI lacks common sense. AI may be able to recognize that within a photo, there’s a man on a horse. But it probably won’t appreciate that the figures are actually a bronze sculpture of a man on a horse, not an actual man on an actual horse.
Let's consider the lesson offered by Margaret Mitchell, a research scientist at Google. Mitchell helps develop computers that can communicate about what they see and understand. As she feeds images and data to AIs, she asks them questions about what they “see.” In one case, Mitchell fed an AI lots of input about fun things and activities. When Mitchell showed the AI an image of a koala bear, it said, “Cute creature!” But when she showed the AI a picture of a house violently burning down, the AI exclaimed, “That’s awesome!”
The AI selected this response due to the orange and red colors it scanned in the photo; these fiery tones were frequently associated with positive responses in the AI’s input data set. It’s stories like these that demonstrate AI’s inevitable gaps, blind spots, and complete lack of common sense.
AI is data-hungry and brittle. Neural nets require far too much data to match human intellects. In most cases, they require thousands or millions of examples to learn from. Worse still, each time you need to recognize a new type of item, you have to start from scratch.
Algorithmic problem-solving is also severely hampered by the quality of data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out. It cannot respond to an unexpected change if it hasn’t been programmed to anticipate it.
Today’s business world is filled with disruptions and events—from physical to economic to political—and these disruptions require interpretation and flexibility. Algorithms alone cannot handle that.
"AI lacks intuition". Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought—things that would require a robot so much processing power that it’s almost inconceivable that we would engineer them.
Algorithms get trapped in local optima. When assigned a task, a computer program may find solutions that are close by in the search process—known as the local optimum—but fail to find the best of all possible solutions. Finding the best global solution would require understanding context and changing context, or thinking creatively about the problem and potential solutions. Humans can do that. They can connect seemingly disparate concepts and come up with out-of-the-box thinking that solves problems in novel ways. AI cannot.
"AI can’t explain itself". AI may come up with the right answers, but even researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. This is very problematic when AI is used in the context of medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even if the AI is right, people will not trust its analytical output.
Artificial Intelligence offers tremendous opportunities and capabilities but it can’t see the world as we humans do. All we need do is work on its weaknesses and have them sorted out rather than have it overly hyped with make-believes and ignore its limitations in plain sight.
Ref: https://thriveglobal.com/stories/...
God, these people...
Little backstory. I'm making an training application and we have a MySQL database set up where some elements of the training are configured. This is so learning experts can easily change some aspects of the training without programmer's help.
Meanwhile, I'm also in the middle of a server migration, because our current server is running a lot of deprecated software and is in dire need of replacement.
This is going pretty slowly, though, because of other, high-priority, work that keeps being shoved my way.
Now, someone accidentally deletes a bunch of data from one of the schemas. No big deal in my book, the training is still in development and we have nightly backups of the database.
So I shoot a support ticket to the hosting provider and ask them to restore a specific schema, telling them to restore the image to some other machine and dump the tables in an MySQL file so I can restore it that way.
I also told them to get the backup of the OLD server, not the NEW one we're still migrating to.
About an hour later, I get a message that they dumped the schema's files in a Temp folder on the D drive. So I RDP to the server to check and... The files aren't there. Just before writing a response asking where the file is, I remembered the server I was migrating to and checked that server, and there were the files.
I had already migrated part of our databases and was testing compatibility before I moved to something else.
The hosting provider just dumped the files of the wrong server, despite me telling them exactly which server to use.
This is not the first time this hosting provider has let me down...
I'm really considering jumping to another one if they keep doing this...
Emotionally painful dev learning experience: My laptop (and only computer I had in the area) broke at the worst possible time during university and the guy fixing it fucked it up meaning it took even longer. Combine this with:
*Stuck having to learn Android Studio in two weeks to make a whole-ass app with a professor who didn't know how to make a Hello World and gave us no resources. Pair project so I had someone depending on me to do my part, meaning a lot of sharing their computer just to be able to use Android Studio.
*Having to work on another solo project using various public and awfully specced university computers. Said project involved real-time 3D graphics and was running at about a third of the speed it should have on every machine.
*Realizing how much I depended on my laptop for entertainment and that I basically had nothing that could help me de-stress and relax at home.
*Not knowing when the laptop's spare parts would arrive or if the repair man would give me bad news and even more delays.
*A very poorly timed issue in my relationship.
I know university can be stressful, even though it never really affected me before or since, but man, those couple of weeks broke me.
One day I decided I wanted to build robots.
And not kidding, the reason I wanted to build them was that I wanted someone interesting to talk to, and still not kidding, I even fantasized about a robot girlfriend... Lame, I know; I think I was a lonely little guy back then, though even after 7 years or so it doesn't feel as though it's that long ago. Maybe because things didn't change that much. Which is worrying, but it's not the topic, so I will pass on that future-past worries bullcrapper. After learning how robots worked and what made them function, things gradually led up to me being more interested in machine learning applications and software. I learned Arduino at first; I think I still have some messy circuits and old Arduinos around. I only finished one robot though, and it couldn't even support its own weight. The servo motors drew so many amps that they heated up the little Arduino even with a fan attached. Probably I should have made use of mechanics-for-robots books and calculated things first. But even though it couldn't walk properly, I still felt success, and I loved it like my own kid (me taking it apart was questionable, but believe me).
After learning Arduino and making that failed project of mine, I picked up C++ and wrote a hello world program, the usual things a starter would do. It was the language in which I wrote my first game, which I finished, and this time it worked. But I never released it, partly because I didn't want to spend a hundred bucks on a license for the engine, and partly because I knew that it was a shit game. If I were to describe it: lines in different colors come from the top, and you need to hit the lines with the same-colored columns to break them. The columns changed their height and location at random. The lines sped up and the gap between them decreased. Now that I think about it, it wasn't half bad. But the code was written in GameMaker Studio's version of C, so I have no way to salvage it.
But I learned a lot of things from that project and that was the goal, so I would call it a win. I don't remember but after sometime I switched to python. And I'm glad I did, it's fun to code in which was the main reason I coded in the first place. Fun.
Life happens and time passes,
Now I'm waiting to take the college entrance exams in a few months, hoping to pass them. My goal is to get into computer engineering, which will be extremely challenging because it's the highest-scoring department at the university I'm aiming at. But hey, if the challenge is great, the reward is greater, right? To be honest I'm still not sure about my career path. Too many choices. So I will just let my own road, called <millions of similarly random events that are actually caused by deterministic reactions, to affect you and your surroundings, leading up to a future which only Laplace's demon can foresee>, guide me. Wish me luck.
Anybody like to rip up CTF (or similar)? I've honestly never done a CTF before, I'd like to give jt a shot. I'll get my ass handed to me because I'm not back up to par on OpSec yet, but I adapt well and when I get into a nice groove I can make shit happen! (I like to think so, anyways haha!!)
I've been in full on dev mode lately and haven't had any time to Hulk Smash for a while... I went to fire up a new Kali live USB today and I couldn't run through the updates like I always have- they changed sooo much and I was pissed because I didn't have ethernet with me. That'll be another day for sure, but I still have my machine with Manjaro armed to the nutsack and back with the BlackArch rep. I def could use a break from the chaos, and getting my ass handed right to me sounds like an awesome time because learning is my favorite thing next to a possible chance at getting to destroy shit.
It's weird, because I'm sort of a n00b but also at the same time I've had computers ripped apart/jammed in my face since every day since I was 9 and Y2K was about to hit the fan lmao!! My hardware/network/layering knowledge is fuckin mint titties, I just can't code like a fuckin madman on the fly. I don't have a "primary" language, because I've been having to work with little bits of several languages for extended periods of time... I can at least find my way around all the dox without much of an issue and have no issue solving the probs I come across which is neat, but until the day comes where I can fuck a gaping hole through my keyboard on the fly like George Hotz during one of his lazy Sunday OpenCV SLAM/Python code streams all jacked up on Herba Mate hahahahaha!!!!
The dude uses fucking VIM and codes faster than anyone I've ever seen on levels of science/math so challenging I almost shit myself inside out when I catch one!!!! The level of respect I have for all my fellow red pills in here is as high as it gets, and that's one of the best parts about being a code junkie- sometimes ya get to cross paths with beastly, out of this world people that teach you so much without even having to explain shit.
If anyone's down, or maybe has some resources for me to check out so I can get my chops up, let's make it happen
I never finished it, but before I was working in the industry, I was coding through a book called Build Your Own AngularJS. My intent was to get piecemeal instruction/examples in TDD and code way above the level of complexity I was used to. You essentially build the core of AngularJS in about 900 unit tests with total coverage. At 1000 pages long, it's no walk in the park.
I gave it up when my time was short, and focused on higher level concepts: building apps, learning tools of the trade.
Now that I am getting plenty of exposure to that level, I am thinking my free learning hours may be better spent going down into the complex worlds shown in this book. A couple of things I found there really stayed with me and shaped how I think about problems. It was also very illuminating to see how complex algorithms work "in the wild". I can't stand learning algorithms in isolation, generally speaking.
Has anyone seen this book? I know the framework itself is older now, but I don't think that's very relevant for this learning use case.
I only know of one student who completed this. Took him a few months. He is an absolute machine.