Search - "predict"
-
No, I'm not hacking.
No, Linux is not a movie.
No, you are not a developer because you can put "Hello World" on a website.
No, this isn't a waste of my time.
Yes, I will use it.
Yes, I'll make you a website for free. NOT!
Your phone is both Android and Samsung.
No, what they did in the movie is impossible.
No, I can't predict the stock market.
No, I'm not Mr. Robot, but I know him...
-
Facebook: "Our facial recognition automatically tags people in pictures."
Tesla: "Our deep learning algorithm drives cars by itself."
Andrew Ng: "I predict patients' likelihood of dying with 99% accuracy."
Google: "You know one of our algorithms is going to pass the Turing test very soon."
Wall Street: "We use satellite images to predict stock prices based on how filled the car parks of specific stores are."
The remaining majority of data scientists: "We overfit linear models."
-
--- GitHub 24-hour outage post mortem ---
As many of you will remember, GitHub fell over earlier this month and cracked its head on the countertop on the way down. For more or less a full 24 hours the repo-wrangling behemoth had inconsistent data being presented to users, slow response times and failing requests during common user actions such as reporting issues and questioning your career choice in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos, after a failing 100G network link was replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter when the term "replication" was used - it's hard to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks if not months after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance, GitHub's "Orchestrator" software did exactly what it was meant to do: it hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up at the east coast DC, Orchestrator was unable to (un)fail-over back to it, due to each cluster containing data the other didn't have.
At this point it's reasonable to assume that pants were turning funny colours - monitoring systems across the board started squealing, firing off messages to engineers demanding they rouse from the land of nod and snap back to a reality that was a bit more "on-fire" than usual. A quick call to Orchestrator's API returned a result set that only contained database servers from the west coast - none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring) engineers realised they were well and truly backed into a corner: the site was flipped into "Yellow" status and internal mechanisms for deployments were locked out. 5 minutes later an Incident Co-ordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move I can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort. By this point hair was turning grey in real time - the fail-over DB cluster had been processing user data for nearly 40 minutes, and every second that passed made the inevitable untangling process exponentially more difficult. Not long after this GitHub made the call to pause webhooks and GitHub Pages builds in an attempt to prevent further data loss, causing disruption to those of us using GitHub as a way of kicking off our deployment processes (myself included, I had to SSH in and run a git pull myself like some kind of savage).
Glossing over several more "And then things were still broken" sections of the post mortem: clever engineers with their heads screwed on the right way successfully executed what I can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. GitHub was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up with and everything was more or less back to normal.
It goes to show that even the best laid plan rarely survives first contact with the enemy - in this case, a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...
-
Long ago, like 5 years back, I made an app for my ex-girlfriend in Symbian to track her periods. The application predicted the date the next period would come, based on her cycle.
However, after 2 months of usage she told me that the application was flashing that she was pregnant. She scared the shit out of herself and made me scared as hell as well.
Later I found out that the variable I used to store the number of days between the last period and the current date was not capable of storing a value of more than 40, I don't know how, and triggered a negative value to be shown.
Early days of my programming. Shit happens.
-
Given a couple of lines of code, I can predict with 80% accuracy which of my teammates wrote it.
-
There needs to be an IDE that can predict what stackoverflow question I'm gonna look up
-
Dev Badass Rant
There are two occasions really:
1) For our C++ project in the third semester, we had to build any kind of C++ application. Guys in teams of 4-5 built record keeping systems and calculators, and one even made a Tic-Tac-Toe app. My friend and I, just the two of us, made a simple program that plays Rock Paper Scissors with you. With the power of OpenCV, it used the camera to track your hand movement, predicted your next move using contours, and displayed the winning move as the computer's move.
For example, if you play Rock, the computer would predict that you were gonna play rock and display paper as its move. It wasn't perfect, but it was ours, right from scratch. When it worked at the presentation, I swelled with pride. 😂
2) I was interested in game dev so I started Unity. The first tutorial in Unity you find is the web series by Unity about rolling a ball. You simply make a platform and control the ball with your keyboard and the camera follows your ball. You also make pick ups and get points based on that. So I started there, finished the tutorial, added a few walls, made edible and non edible pick ups, dimmed the entire scene, adjusted the camera angles, transferred controls to mobile gyroscope and added a few other things and voila! MazeBall was born. It has only one level and I thought it was pretty shit.
I decided to show it to a friend, and when I showed it to my mate (the one I worked with on the C++ project), my other classmates saw it and were impressed. Like, so impressed a couple of them transferred it to their phones and took it home with them. 😂 Was inspired to improve.
-
Microsoft has added a machine learning model to predict when the right time is to restart your device for updates.
Coming with the next major update.
This should be interesting...
-
Our PM is on vacation. And our CTO/CEO takes control of the PM role.
So today he decided it was time to just start a customer change request. Regardless of the customer not having approved the actual solution and estimate.
He just said that he did not want to waste any more time talking to the customer. Now they are gonna get whatever he thinks they want.
I predict this will backfire in a fabulous way. What could possibly go wrong 🤔
-
This applies only to Headphone guys. Don't listen to songs that contain lyrics you understand. Something with foreign lyrics you can't predict is fine. This way you won't find your brain resonating with the song instead of your code.
I'm pretty sure most of you already knew this, but it's worth mentioning
-
I just lost faith in the entire management team of the company I'm working for.
Context: A mid sized company with
- a software engineering department consisting of several teams working on a variety of products and projects.
- a project management department with a bunch of project managers that mostly don't know shit about software development or technical details of the products created by engineering.
Project management is unhappy about the fact that software engineering practically never sticks to the plan regarding cost, time and function that was made at the very beginning of the project. Oh really? Since when does waterfall project management work well? As such they worked out a great idea how to improve the situation: They're going to implement *Shopfloor Management*!
Ever heard about Shopfloor Management? Probably not, because it is meant for improving repetitive workflows like assembly line work. In a nutshell, it works by collecting key figures, detecting deviations in these numbers and performing targeted optimization of identified problem areas. Of course, there is more to Shopfloor Management, but that refers largely to the way the process just described is to be carried out (using visualisation boards, treating the employee well, letting them solve the actual problem instead of management, and so on...). In any case, this process is not useful for highly complex and hard-to-predict workflows like software development.
That's like trying to improve a book author's output by measuring lines of text per day and fixing deviations in observed numbers with a wrench.
Why the hell don't they simply implement something proven like Scrum? Probably because they're afraid of losing control, afraid of self-managed employees, afraid of the day everybody realizes that certain management layers are useless overhead that doesn't help in generating value but only bloats.
Fun times ahead!
-
Son of a... insurance tracker
You hit delete and I’m stuck with this reply!?!
Stuff it, I’ll rant about it instead of commenting.
How's an insurance company any different from Google tracking your every move, except now it's for "insurance policy premiums" and setting pricing models on when, how, and potentially why you drive?
Granted, no company should have enough GPS data to be able to create a behaviour-driven AI that can predict your wheres and whens with great accuracy.
The fight to remove this kind of tech from our lives is long over; now we have to deal with the consequences of giving companies way too much information.
- good lord, I sound like a privacy activist here, I think I've been around @linuxxx too long.
-
Why I hate my job: 18 out of 21 developers are Chinese daily smokers who barely speak English.
Why I love my job: We build software/hardware to predict future earthquakes, saving lives and hundreds of millions or even billions of dollars in damages. And of course making China super rich by selling it.
-
It was my first ever hackathon. Initially, I registered with my friend, who is a non-coder but wanted to experience the thrill of joining a hackathon. But when we arrived at the event, someone older than us was added to our team because he was solo at the time. Eventually, this old guy (not too old, around his 20s; let's call him A) and I got close.
We chose the problem where one is tasked to create an ML model that can predict the phenotype of a plant based on genotypic data. Before the event, I didn’t have any background in machine learning, but A was so kind to teach me.
I learned key terms in ML and was able to train different models, and we ended up using my models as the final product. The highest accuracy I got for one of my models was 52%, but that didn't discourage me.
We didn't win, however. But it was a great first-time experience for me.
Also, he gave me an idea about pitching, because he was also taking an MS in Data Science (I think) and had a great background in sales as well, so yeah, I got that too.
-
Something I can never understand with my boss. This really makes me concerned with the future of the company imo.
I was given a project contract with all of the specifications and how many hours I had for each assignment.
I did my work and I kept myself within the time limit.
Today my boss and I had a status meeting about the project, in which he proposed an addition to one of my features which would basically require us to start over with it. He started to blame not only me but also my coworker for why we didn't predict that HE would want this addition to the feature. We got into a heated discussion over him putting that blame on us. The point I stuck to was that the responsibility for specifications lies with the person who briefs a worker, not with the worker, who would otherwise be playing a guessing game of what the briefer wants. He vehemently denied that is how things work.
He basically shushed me and said that is how the order of things goes.
Am I in the wrong here?
-
In high school I took a special major in which we learned various computer and mathematics skills such as neural networks, fractals, etc.
One of the teachers there, who for me was also a mentor, is a physicist. He taught us Python, which he didn't know very well (he wasn't that bad either), and science, which was his true passion.
My end project was to try to predict the stock market using a simple neural network and the daily graphs of 50 NASDAQ companies. The result reached 51% prediction accuracy on average, which was awful, but I couldn't forget the happiness and curiosity working on this project made me feel.
Now, 5 years later, I have a BSc and am finishing an MSc in Computer Science, and I sincerely want to thank this mentor for giving me the guts and will to accomplish this.
-
Got rejected in an interview for a web developer role... The interviewer showed me the company website and asked if it was made in HTML or WordPress... I said HTML, which was wrong...
Am I incompetent? How can I predict the platform just by looking at the UI...?
-
Hey! How do I do machine learning?
Well first you start off with a metric shit ton of data.
And then you .fit() your data
from there you can .predict() your data
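A minimal sketch of the joke in practice, assuming scikit-learn and a toy data set:

from sklearn.ensemble import RandomForestClassifier

# Step 1: a metric shit ton of data (a very small ton, for the sketch)
X = [[0, 0], [1, 1], [0, 1], [1, 0]]  # features
y = [0, 1, 1, 0]                      # labels

clf = RandomForestClassifier()
clf.fit(X, y)                  # and then you .fit() your data
print(clf.predict([[1, 1]]))   # from there you can .predict() your data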
Trust me, the algorithms are already there. All you need to do is get the data.
-
Boss: <Commits odd and breaking changes to my specs>
Boss: How did these specs of yours ever pass!?
Boss: That's not how this gem works!
Boss: <Doesn't mention that the gem was updated well after I finished the ticket>
Boss: Go fix your specs!
...
-
So I finished my first semester at NYU as a CS master's student. During the first semester I took a class called Heuristic Problem Solving. Every week a competitive game would be introduced to us and played over the following two weeks. And trust me, the games aren't easy. I teamed up with another guy who I had no idea about, and we named our team "we don't know". At the end of the semester we had won seven out of nine games, and by won I mean that we beat the whole class in the match. And my teammate became a really good friend.
By telling this story, I want to make a point. I love problem solving: not problems in an algorithm book where you apply an algorithm and do some trick to solve them, but real world problems where you hope for the best and anticipate and predict your opponent's moves. However, America's school system doesn't teach that.
When I applied to graduate school, no school wanted me, because I have an average GPA of 3.6 and no outstanding achievements. I can solve problems in my dreams because I have an active mind; I can propose the solution to a project one month before my teammates realize they are essentially doing what I told them the solution should be. But so what, I can't write those on my application.
One of the professors told me that my professor shared the story of my team during a faculty dinner, and they were very impressed by our achievement. So I guess I'm not dumb. But after all, companies and schools will look at your transcript and decide who you are.
I love myself for having random thoughts all the time that can lead to innovative problem solving. But I also hate myself for not being able to study like the good kids do.
-
What do we have to give you to make you able to accurately predict the scope and length of time it will take you to develop something you know nothing about and have no experience with? How hard could it possibly be? You click a button and BOOM! A unicorn! Please provide estimate in hrs EOD.
-
Client : We want to develop this particular software. While developing it, we will be following Agile methodology.
Developers: Sure.
After developer achieves few features and decides to give 1st Demo of the software to the client.
Client : Wtf is this? This is an incomplete software, there are bugs in it.
Developer : Yes, you point those out to me and I will solve them.
Client : What do you mean point them out for you, couldn't you do it yourself?
Developer : As a standard method, we often do unit tests, but we are not testers, and with a strict deadline to match, we are focused more on the core implementation than on checking again and again for minor bugs.
Client : I thought it would be foolproof software without any bugs in the 1st demo.
Developer : Software development is a process. It's not straightforward; hence, as you yourself mentioned at the start, it's agile.
Client : If that's so, let's make it not agile and make you rot in hell for the next few days. Now next time show me a demo with no bugs and great complicated features, and we will not mention our expectations - predict them by yourselves. And most importantly, here's an impractical strict deadline.
-
I hate time.
Yes, that dimension which unidirectionally rushes by and makes us miss deadlines.
Also yes, that object in most programming languages which chokes to death on formatting conversions, timezones, DST transitions and leap seconds.
But above all, I hate doing chronological things from the point of view of code, because it always involves scheduling and polling of some kind, through cron jobs and queues with workers.
When the web of actions dependent on predicted future events and past ones becomes complicated, the queries become heavy... and with slow queries, queues might lock or get delayed just a little bit...
So you start caching things in faster places, figure out ways to predict worker/thread priorities and improve scheduling algorithms.
But then you start worrying about cache warming and cascading, about hashing results and flushing data, about keeping all those truths in sync...
I had a nightmare last night.
I was a watchmaker, and I had to fix a giant ticking watch, forced to run like a mouse while poking at gears.
I fucking need a break. But time ticks on...
-
There's so much hype and bullshit around Machine Learning (ML). And if I have to read one more crappy prediction of who survived on the Titanic, I'll go postal.
So, what real-world problems are you using it to address...and how successful has it been? What decisions have you supported using ML? What models did you use (e.g. logistic regression, decision trees, ANN)?
Anyone got any boringly useful examples of ML in production?
And don't say you're using it to predict survival rates for the design of new cruise ships...although, to be fair, that might be quite interesting...
-
I didn't leave, I just got busy working 60 hour weeks in between studying.
I found a new method called matrix decomposition (not the known method of the same name).
The premise is that you break a semiprime down into its component numbers and magnitudes, let's say 697 for example. It becomes 600, 90, and 7.
Then you break each of those down into their prime factorizations (with exponents).
So you get something like
>>> decon(697)
offset: 3, exp: [[Decimal('2'), Decimal('3')], [Decimal('3'), Decimal('1')], [Decimal('5'), Decimal('2')]]
offset: 2, exp: [[Decimal('2'), Decimal('1')], [Decimal('3'), Decimal('2')], [Decimal('5'), Decimal('1')]]
offset: 1, exp: [[Decimal('7'), Decimal('1')]]
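(The actual decon isn't shown here, so as a hedge: this is a minimal sketch that reproduces the output above, leaning on sympy's factorint for the factorizations.)

from decimal import Decimal
from sympy import factorint

def decon(n):
    # Split n into its magnitude components (697 -> 600, 90, 7),
    # then prime-factorize each component with exponents.
    s = str(n)
    rows = []
    for i, d in enumerate(s):
        if d == '0':
            continue
        component = int(d) * 10 ** (len(s) - 1 - i)
        offset = len(s) - i  # digit count of this component
        exp = [[Decimal(p), Decimal(e)] for p, e in factorint(component).items()]
        rows.append((offset, exp))
    return rows

for offset, exp in decon(697):
    print(f"offset: {offset}, exp: {exp}")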
And it turns out that in larger numbers there are distinct patterns that act as maps at each offset (or magnitude) of the product, mapping to the respective magnitudes and digits of the factors.
For example I can pretty reliably predict from a product, where the '8's are in its factors.
Apparently there's a whole host of rules like this.
So what I've done is gone and started writing an interpreter with some pseudo-assembly I defined. This has been ongoing for maybe a month, and I've had very little time to work on it in between at my job (which I'm about to be late for here if I don't start getting ready, lol).
Anyway, the long and the short of it: the plan is to generate a large data set of primes and their products, and then write a rules engine to generate sets of my custom assembly language, and then fitness test and validate them, winnowing what doesn't work.
The end product should be a function that lets me map from the digits of a product to all the digits of its factors.
It technically already works, like I've printed out a ton of products and eyeballed patterns to derive custom rules, it's just not the complete set yet. And instead of spending months or years doing that, I'm just gonna finish the system to automatically derive them for me. The rules I found so far have tested out successfully every time, and whether or not the engine finds those will be the test case for whether the broader system is viable, but everything looks legit.
I wouldn't have pursued this except, when I realized the production of semiprimes *must* be non-Eulerian (long story), it occurred to me that there must be rich internal representations mapping products to factors that we were simply missing.
I'll go into more details in a later post, maybe not today, because I'm working till close tonight (won't be back till 3 am), but after 4 1/2 years the work is bearing fruit.
Also, it's good to see you all again. I fucking missed you guys.
-
The #10YearChallenge is basically the father of a data set for a new AI which will predict how X looks after 10 years.
Data mining at its best
-
I've assembled enough computing power from the trash. Now I can start to build my own personal 'cloud'. Fuck I hate that word.
But I have a bunch of i7s and i5s on hand, in towers. Next is just to network them and set up some software to receive commands.
So far I've looked at Ray and Dispy for distributed computation. If there are others that any of you are aware of, let me know. If you're familiar with any of these and know which one is the easier approach to get started with, I'd appreciate your input.
The goal is to get all these machines up and running, a cloud that's as dirt cheap as possible, and then train it on sequence prediction of the hidden variables derived from semiprimes. Right now the set is unretrievable, but there's a lot of heavily correlated known variables, so I'm hoping the network can derive better and more accurate insights than I can in a pinch.
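For what it's worth, a minimal Ray sketch of the kind of fan-out this cluster is meant for; the cluster wiring and the scoring function are placeholders, not the real work:

import ray

# Connect to the head node of the home-built cluster
# (each tower joins beforehand via `ray start --address=<head-ip>:6379`)
ray.init(address="auto")

@ray.remote
def score(semiprime, identity_id):
    # Placeholder: evaluate one candidate identity against one semiprime
    return identity_id, semiprime % (identity_id + 2)

# Fan the work out across the machines, then gather the results
futures = [score.remote(6557, i) for i in range(100)]
print(ray.get(futures))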
Because any given semiprime has numerous (hundreds of known) identities which immediately yield both of its factors if say a certain constant or quotient is known (it isn't), knowing any *one* of them and the correct input, is equivalent to knowing the factors of p.
So I can set each machine to train and attempt to predict the unknown sequence for each particular identity.
Once the machines are set up and I've figured out which distributed library to use, the next step is to set up Keras and train the model using, say, all the semiprimes under one to ten million.
I'm also working on a new way of measuring information: autoregressive entropy. The idea is that the prevalence of small numbers when searching for patterns in sequences is largely ephemeral (there's no long-term pattern), and AE allows us to put a number on the density of these patterns in a partial sequence, but it's only an idea at the moment and I'm not sure what use it has.
Here's hoping the sequence prediction approach works.
-
I want to share one very interesting incident -
Once upon a time (9 yrs back) I got my first PC. It had Windows XP.
After using it for 1.5 hours, I realised there were some files in C:
I said to myself: my hard disk capacity is 150GB, but why do some files exist in C:?
Within the next 4 seconds, I issued a command to delete all files in C:
The rest you can predict :p
-
Still on the prime numbers bender.
Had this idea that if there were subtle correlations between a sufficiently large set of identities and the digits of a prime number, the best way to find them would be to automate the search.
And that's just what I did.
I started with trace matrices.
I actually didn't expect much of it. I was hoping I'd at least get lucky with a few chance coincidences.
My first tests failed miserably. Eight percent here, 10% there. "I might as well just pick a number out of a hat!" I thought.
I scaled it way back and asked if it was possible to predict *just* the first digit of either of the prime factors.
That also failed. Prediction rates were low still. Like 0.08-0.15.
So I automated *that*.
After a couple days of on-and-off again semi-automated searching I stumbled on it.
[1144, 827, 326, 1184, -1, -1, -1, -1]
That little sequence is a series of identities representing different values derived from a randomly generated product.
Each slots into a trace matrix, the results of which predict the first digit of one of our factors, with an 83.2% accuracy even after 10k runs, rising higher with the number of trials.
It's not much, but I was kind of proud of it.
I'm pushing for finding 90%+ now.
Some improvements include using a different sort of operation to generate results. Or logging all results and finding the digit within each result that's *most* likely to predict our targets, across all results. (Right now I just take the digit in the ones column, which works but is an arbitrary decision on my part.)
There's also the fact that it's trivial to correctly guess the digit 25% of the time, simply by guessing 1, 3, 7, or 9, because all primes, except for 2 and 5, end in one of these four.
I have also yet to find a trace with a specific bias for predicting either the smaller of two unique factors *or* the larger. But I haven't really looked for one either.
I still need to write a generator that takes specific traces and lets me mutate some of the values, to push them towards certain 'fitness' levels.
This would be useful not just for very high predictions, but to find traces with very *low* predictions.
Why? Because it would actually allow for the *elimination* of possible digits, much like sudoku, from a given place value in a predicted factor.
I don't know if any of this will even end up working past the first digit. But splitting the odds between the two unique factors of a prime product and getting a 40+% chance of guessing correctly isn't too bad, I think, for a total amateur.
Far cry from a couple years ago claiming I broke prime factorization. People still haven't forgiven me for that, lol.
-
I created some test entities specifically for our staging site. In capital letters, in the BIG TITLE of each entity, I included DO NOT DELETE. This is very clearly visible in the CMS. What's the first thing the content managers do?
You guessed it.
I guess if plain English doesn't work, I'll have to use Kindergarten rules and put a custom lock on them so they can never be deleted.
Muad'Dib fullstackchris can already predict the future, in a few weeks: "hey!!!! fullstackchris, I can't delete these test entities!!!!! whats wrong with the system?!?!"
sigh...
-
At age 6 I was deemed as an idiot savant. Coding is boring for me now. Age 7-10: I worked for an underground agency that was focused on harvesting people's organ data from MRI machines to predict the economic future. 10-14: I experimented with smoking crack to increase finger efficiency. Since then I've quit, and I've been living in Miami trying to create a lofi industrial folk album using nothing but a TI-84, some wire, and an old fender amp.
-
Get ready for an awesome conspiracy theory / WhatsApp forward :D I like how people are coming up with new stuff every minute of their boredom. Makes you ponder:
====================================
🔥🔥🔥🔥🔥🔥
How to dominate the world quickly?
THE GREAT CHINESE STAGE
1. Create a virus and the antidote.
2. Spread the virus.
3. A demonstration of efficiency, building hospitals in a few days. After all, you were already prepared, with the projects, ordering the equipment, hiring the labor, the water and sewage network, the prefabricated building materials and stocked in an impressive volume.
4. Cause chaos in the world, starting with Europe.
5. Quickly plaster the economy of dozens of countries.
6. Stop production lines in factories in other countries.
7. Cause stock markets to fall and buy companies at a bargain price.
8. Quickly control the epidemic in your country. After all, you were already prepared.
9. Lower the price of commodities, including the price of oil you buy on a large scale.
10. Get back to producing quickly while the world is at a standstill. Buy what you negotiated cheaply in the crisis and sell more expensive what is lacking in countries that have paralyzed their industries.
After all, you read more Confucius than Karl Marx.
PS: Before laughing, read the book by Chinese colonels Qiao Liang and Wang Xiangsui, from 1999, “Unrestricted Warfare: China’s master plan to destroy America”, on Amazon, then we talk. It's all there.
🔥🔥🔥🔥🔥🔥🔥🔥
Worth pondering..
Just Think about this...
How come Russia & North Korea are totally free of Covid- 19? Because they are staunch ally of China. Not a single case reported from this 2 countries. On the other hand South Korea / United Kingdom / Italy / Spain and Asia are severely hit. How come Wuhan is suddenly free from the deadly virus?
China will say that their drastic initial measures they took was very stern and Wuhan was locked down to contain the spread to other areas. I am sure they are using the Anti dode of the virus.
Why Beijing was not hit? Why only Wuhan? Kind of interesting to ponder upon.. right? Well ..Wuhan is open for business now. America and all the above mentioned countries are devastated financially. Soon American economy will collapse as planned by China. China knows it CANNOT defeat America militarily as USA is at present
THE MOST POWERFUL country in the world. So use the virus...to cripple the economy and paralyse the nation and its Defense capabilities. I'm sure Nancy Pelosi got a part in this. . to topple Trump. Lately President Trump was always telling of how GREAT American economy was improving in all fronts. The only way to destroy his vision of making AMERICA GREAT AGAIN is to create an economic havoc. Nancy Pelosi was unable to bring down Trump thru impeachment. ....so work along with China to destroy Trump by releasing a virus. Wuhan,s epidemic was a showcase. At the peak of the virus epidemic. ..
China's President Xi Jinping...just wore a simple RM1 facemask to visit those effected areas. As President he should be covered from head to toe.....but it was not the case. He was already injected to resist any harm from the virus....that means a cure was already in place before the virus was released.
Some may ask....Bill Gates already predicted the outbreak in 2015...so the chinese agenda cannot be true. The answer is. ..YES...Bill Gates did predict. .but that prediction is based on a genuine virus outbreak. Now China is also telling that the virus was predicted well in advance. ....so that its agenda would play along well to match that prediction. China,s vision is to control the World economy by buying up stocks now from countries facing the brink of severe ECONOMIC COLLAPSE. Later China will announce that their Medical Researchers have found a cure to destroy the virus. Now China have other countries stocks in their arsenal and these countries will soon be slave to their master...CHINA.
Just Think about it ...
The Doctor Who declared this virus was also Silenced by the Chinese Authorities...
-
Yesterday was release day for a project, and I've never been this nervous before. Why? Because of the amount of chaos in this project, I cannot predict the behavior of the system; anything might just break T_T
-
At least make it random instead of cycling through? Really?? In its presentation, Google Assistant was presented as this amazing new AI that used the latest and best machine learning algorithms and methods on the market. Don't get me wrong, it's awesome it can predict patterns in my daily life and interactions, but that's what machine learning does. We still haven't come very far with human-software interaction technologies, have we?
-
I'm debugging a script...
It takes 1+ minute to start, because it loads data from a remote API, and apparently loading 80k objects takes a lot of time, even though I need only the headers.
I could optimize it. Like, add a local cache. But I will not.
Instead I will waste 1 minute, then another minute, then another minute, each time hoping it's the last pass, but no. I will waste the whole day on it and at the end of the day I will still NOT have the slightest idea why it is slow. That is what will happen, I predict it.
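(For the record, the local cache I will not add could be as little as this; the file name and fetch function are hypothetical:)

import json
import os

CACHE_FILE = "headers_cache.json"  # hypothetical local cache path

def cached_headers(fetch_remote):
    # Load the 80k object headers from disk if we already have them;
    # only hit the slow remote API on the first run.
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    headers = fetch_remote()
    with open(CACHE_FILE, "w") as f:
        json.dump(headers, f)
    return headers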
Good times
-
My boss drives me crazy. He hired me to work on his SDK, which is game related. So I am responsible for basically everything, including an ingame UI (menu etc.) and predicting the future path of a game object (unit, minion, ..) when a certain spell is cast on it. For that task I divided the prediction into first getting the predicted path of the unit without a spell being cast, and then a class that would cast the spell on that path and estimate the unit's reaction to that cast. Simplified, but that way you get a pretty okayish result. Now he thinks that is too complicated. "Can we not put everything into one class? If someone wants to replace the prediction he needs to read documentation for hours." WHAT THE FUCK DID YOU EXPECT, THAT IT'S GONNA BE SOME ONE-CLASS, 3K-LINES MAGIC??
Same for the GUI. We only have DirectX and don't want to use a framework. Guess what, it's more than one class if you want to separate view, model, controller or whatever fucking "design pattern" thing you use.
And then Git... he seriously said "let's not use branches till release, I feel like they slow things down". Before I was there they did every operation on master.
And if it was just that..
/rant
I put much work into this. Time to leave?
-
Oh no, AI can destroy humanity in the future! It is like Skynet and such... Bad! It will be the end! FEAR THE AI!
Yeah, so I can't sleep now, so I'm writing a rant about that.
What a load of bullshit.
AI is just a bunch of if-elses, and I'm not joking. They might not be binary, and some architectures of ML are more complex, but in general they are a lot of little neurons that decide what to output depending on the input. Even humans work that way. It is complicated to analyse, yes. But it is not going to end humanity. Why? Because by itself it is useless. Just like a human without arms and legs.
But but but... internet... nukes... robots! Yeah... So maybe DON'T FUCKING GIVE IT BLOODY WEAPONS?! Would you wire a fucking random number generator to a bomb? If you can't predict the actions of a black box, don't give it fucking influence over anything! This is why governments aren't giving away nukes to everybody!
Also, if you think that your Skynet will take control of the internet, remember how flawless our infrastructure is, and how that infrastructure is so fast that it will be able to accommodate the terabytes-per-second or more of throughput needed by the AI to operate. If you connect it to the internet using USB 2.0, it won't be able to do anything bloody dangerous, because it can't overcome the laws of physics... If the connection isn't the issue, just imagine the AI's struggle to hack every possible server without knowing about those 1 000 000 errors and "features" that those servers were equipped with by their master programmers... We can't make them work properly, let alone modify them to do something sinister!
AI is a tool, just like nuclear power. You can use it safely, but if you are an idiot then... no matter what the technology is, you are going to fuck shit up.
Making a reactor that can go prompt critical? Giving AI weapons or control over something important? Making nukes without proper antitamper measures? Building a chemical plant without the means to contain a potential chemical leak? Just doing something stupid? Yeah, that is the cause of the damage, not the technology itself.
And that is true for everything in life, not only AI.
-
This is my understanding of "Machine Learning" in general
There are two sets of data:
1. In first data set, all the properties are known
2. In the second set, some properties are not known.
The goal of the machine learning is to find the value of the unknown properties of the second data set.
We do this by finding (or training) a suitable machine learning model (mathematical, logical, or any combination thereof) that, on the first data set, computes the values of the properties which are unknown in the second data set, with minimum error, since there we already know the real values of those properties.
Now, use this model to predict the unknown properties of the second data set.
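A minimal sketch of that description, assuming scikit-learn and toy numbers:

from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# First data set: all properties known
X_known = [[1], [2], [3], [4]]
y_known = [2.1, 3.9, 6.2, 8.1]

# Training = finding the model that computes the property with minimum error
model = LinearRegression().fit(X_known, y_known)
print("error on known set:", mean_squared_error(y_known, model.predict(X_known)))

# Second data set: the property is unknown, so the model predicts it
X_unknown = [[5], [6]]
print("predicted properties:", model.predict(X_unknown))
-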
Whoo!
Gave a talk at another local dev meetup yesterday; my 2nd talk so far. Was surprised at the generally positive reception.
The presentation was on a piece of software I used recently. Initially I wasn't sure how to predict the reception, as I wasn't sure what to focus on, so I just thought I'd give an intro on it and highlight some of the features I liked about it.
-
The guy who leads the Object-Oriented Programming classes/labs told us that we have to make a game or an app to pass this semester.
I was so hyped, I instantly started reading up on creating a 2D engine in C++ (which I don't like as much as C#, but those were his conditions).
..as soon as I'd created the base for the engine, he said that the first version has to be console-based.. so I'm like - okay, how do I show my 2D _graphical_ engine in a console?
So I came up with showing basic vector maths like movement towards a bearing angle and whatnot.
..now it's been pointed out that we are supposed to write documentation, except it's supposed to contain info on ALL libraries and ALL classes our project will have.. which is insane; how can one predict what he'll need to accomplish the task? You can only know half of the things you'll need, unless the project is way too simple.
I'm just plain annoyed, because this whole 'wow, I can show off my mad skills' turned into 'wow, I have to do shit the tedious way', and I'm already crying that I've picked a 2D engine and not a simpleton game like noughts and crosses.
-
Asking for a precise or accurate estimate is asking me to predict the future, which is essentially asking me to lie to your face.
And I'm a terrible liar. Please don't make me lie.
-
...This algo can predict new thermoelectric material discoveries years in advance...
Me to all material scientists : "Work harder or we'll replace you with AI".
https://techxplore.com/news/...
P.S.: I also need to work harder, as I barely know the surface of Linear Regression.
-
OpenAI in name only. At least rename the company, fuckers.
Governments and three-letter agencies around the world will plan PSYOPS or SPECOPS using AI
Facebook, Google, or Amazon will use these models to study and predict your behavioral patterns.
-
So I came across this meme and it got me thinking.
We say that if our universe is truly infinite, we are bound to find a place that is the exact replica of our local cosmic neighborhood eventually if we keep looking.
But procedurally generated worlds like Minecraft have that determinism to their world structure (with an initial seed to calculate everything), where you can predict how the local neighborhood will look at any distance, no matter how far.
So would it be correct to say that in a game like Minecraft, where the world is generated procedurally with a deterministic algorithm, it's not guaranteed that you can find the exact same local neighborhood from one seed in any other seed?
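The determinism in question can be made concrete with a toy generator; this is nothing like Minecraft's actual algorithm, but it shows the principle:

import hashlib

def height(seed, x, z):
    # Same seed + same coordinates -> same terrain, every time
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return digest[0] % 32

# The local neighborhood at any distance is fully determined by the seed:
print([height(42, 10_000 + dx, -4_200) for dx in range(5)])
-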
This is gonna be a long post, and inevitably DR will mutilate my line breaks, so bear with me.
Also I cut out a bunch because the length was over the limit, so I'll post the second half later.
I'm annoyed because it appears the current stablediffusion trend has thrown the baby out with the bath water. I'll explain that in a moment.
As you all know I like to make extraordinary claims with little proof, sometimes for shits and giggles, and sometimes because I'm just delusional, apparently.
One of my legit 'claims to fame' is that, on the theoretical level, I predicted most of the developments in AI over the last 10+ years, down to key insights. I've never had the math background for it, but I understood the ideas I was working with at a conceptual level. Part of this flowed from powering through literal (god I hate that word) hundreds of research papers a year, because I'm an obsessive like that. And I had to power through them, because a lot of the technical low-level details were beyond my reach, but architecturally I started to see a lot of patterns, and began to grasp the general thrust of where research and development *needed* to go.
In any case, I'm looking at stablediffusion and what occurs to me is that we've almost entirely thrown out GANs. As some or most of you may know, a GAN is where networks compete: one to generate outputs that look real, another to discern which is real, and by the process of competition, each improves the ability to generate a convincing fake, and to discern one. Imagine a self-sharpening knife and you get the idea.
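A minimal sketch of that competition, assuming PyTorch, with a 1-d toy distribution standing in for 'real' data:

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(200):
    real = torch.randn(32, 1) * 1.25 + 4.0   # 'real' samples from N(4, 1.25)
    fake = G(torch.randn(32, 8))             # generator's forgeries

    # Discriminator learns to score real as 1 and fake as 0
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Generator learns to make D call its fakes real: the knife sharpens itself
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()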
Well, when we went to the diffusion method, upscaling noise (essentially a form of controlled pareidolia using autoencoders over seq2seq models), we threw out GANs.
We also threw out online learning. The models only grow on the backend.
This doesn't help anyone but those corporations that have massive funding to create and train models. They get to decide how the models 'think', what their biases are, and what topics or subjects they cover. This is no good in the long run, but that's more of an ideological argument. That's not the real problem.
The problem is they've once again gimped the research, chosen a suboptimal trap for the direction of development.
What interested me early on in the lottery ticket theory was the implications.
The lottery ticket theory says that part of the reason *some* RANDOM initializations of a network train/predict better than others is essentially down to a small pool of subgraphs that happened, by pure luck, to chance on initializations that just so happened to be the right 'lottery numbers', as it were, for training quickly.
The first implication of this is that the bigger a network, the greater the chance of these lucky subgraphs occurring. Whether their density grows faster than the density of the 'unlucky' or average subgraphs is another matter.
From this, though, they realized what they could do was search out these subgraphs and prune many of the worst or average performing neighbor graphs, without meaningful loss in model performance. Essentially they could *shrink down* things like chatGPT and BERT.
The second implication was more subtle and overlooked, and still is.
The existence of lucky subnetworks might suggest nothing additional - in which case the implication is that *any* subnet could *technically*, by transfer learning, be 'lucky' and train fast or be particularly good at some unknown task.
INSTEAD, however, what has happened is we haven't really seen that. What this means is actually pretty startling. It has two possible implications, either of which will have significant outcomes on the research sooner or later:
1. There is an 'island' of network size, beyond what we've currently achieved, where networks that are currently state of the art at some things rapidly converge to state-of-the-art *generalists* in nearly *all* tasks, regardless of input. What this would look like at first is a gradual drop-off in gains from the current approach, characterized as a potential new "AI winter", or a "limit of the current approach", which wouldn't actually be the limit, but a saddle point in its utility across domains and its intelligence (for some measure and definition of 'intelligence').
-
Finishing my software to predict ice-hockey results... so I would finally have a portfolio to show, just in case I decide to drop out of academia one day 😥
-
Love-hate kinda deal with this. But I am creating a program in answer set programming that will help me analyze famous chess matches from legends such as B. Fischer, Carlsen, etc., in an effort to stop at one point in the match and predict what could have happened differently in order to make the other player win. I am adding limiters so as not to propagate into every fucking solution in existence, else the processing power required to solve this shit would be all too hardcore. I learned about this programming paradigm in one of my graduate-level classes using a tech known as Clingo, which is similar to Prolog. I am doing it cuz I sucked at Clingo, and because of my pride I aim to make this project a reality, to properly say that I know how to use it.
Current status: failing somewhat miserably
-
So, for the last year or so, we've been playing with a natural language A.I.
The goal was to predict port, truck and rail service disruption due to social unrest.
The trick here is that our AI would "read between the lines" of today's news articles and spit out keywords that were likely to appear in near-future articles, thus giving us an early warning before some union or army starts blockading roads.
It... did not work as intended. But some very weird results came out.
Apparently, we made a robotic "kid that screams that the emperor has no clothes", yielding unlikely (but somewhat expected) keywords when fed collections of articles.
We gave it marketing content about our company. It replied "high suicide rate".
-
Next personal fail ...
previous rant
https://devrant.com/rants/2060249/...
Turned out that WaveNet is sequential, so it needs the previous step to predict the next.
Quite obvious when you look at how people speak sentences; they hardly stop in the middle of a word.
🤔
need to think how to proceed next, how to cut sentences.
Watched deepvoice3 and some accent models from baidu.
I can generate 8 sentences at a time, and each takes 8 minutes, so if I cut between words and get the last mels between words right, I can get it down to 1 minute, but I need to store the model somewhere.
I forgot my machine learning and speech synthesis skills from a previous life; time to load more skills ...
-
Week : 52 ( Year 1 )
It has been a year since I started asking you how your weekends are.
Maybe I should feed all the responses to an AI and predict what will happen next in your life.
Anyway, how is your weekend going?
Previous Week : https://devrant.com/rants/110185855
-
I always wanted to become a businessman like my dad, and I was going to study BBA. Until I saw the TV show Person of Interest. I know it sounds silly, but it inspired me to make my own AI system that can predict stuff.
I could not make the Machine or Samaritan, but in my final year project we managed to make an AI system that can categorize emails automatically without any input from the user. The system can create category names by itself and file the emails accordingly.
-
Has anybody else gotten to the point where people who need to mansplain how language models aren't truly sentient/conscious/intelligent are now more annoying than people who think language models are sentient/conscious/intelligent?*
It has been a tight race, but I think I have just about hit the inflection point.
The amount of time I've wasted because of someone condescendingly barging into a conversation with an iamverysmart 'actually, you see, they are just automata trying to predict the next text tokens'. When in actuality, everybody in the discussion is aware, and that is not the point.
And to further exacerbate it, with a good number of them it is really difficult to get this through their thick little skulls. They just keep parroting the same thing over and over. Ironically, in their single-minded, ego-driven desire to be the Daniel Dennett of the chat, they actually come across as less sentient/conscious/intelligent than a language model.
(*this should not be taken as endorsement for or against that idea - it is actually mostly orthogonal to this rant)
-
I wonder if anyone has considered building a large language model, trained on consuming and generating token sequences that are themselves the actual weights or matrix values of other large language models?
Run LoRA to tune it to find and generate plausible subgraphs for specific tasks (an optimal search for weights that are most likely to be initialized by chance to ideal values, i.e. the winning lottery ticket hypothesis).
The entire thing could even be used to prune existing LLM weights, in a generative-adversarial model.
Shit, there's enough embedding and weight data to train a meta-LLM from scratch at this point.
The sum total of trillions of parameters in models floating around the internet could be used as training data.
If the models and weights are designed to predict the next token, there shouldn't be anything to prevent another model, trained on this sort of distribution, from generating new plausible models.
You could even do task-prompt-to-model-task embeddings by training on the weights of task-specific models, do vector searches to mix models, etc., and generate *new* models - not new text, not new imagery, but new *models*.
It'd be a model for training/inferring/optimizing/generating other models.
-
I want to write a program that uses machine learning to predict questions in an exam. The questions to be predicted are based on topics or trends from one year of newspapers and related topics from a syllabus. I wish to use Python for this but don't know where to start. I know nothing about ML! Wish to structure this out. Help me.
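(One possible starting point, as a hedged sketch: the articles are toy stand-ins for a scraped corpus, and matching the extracted terms against the syllabus is left out.)

from sklearn.feature_extraction.text import TfidfVectorizer

# A year of newspaper articles would go here; three toy ones for the sketch
articles = [
    "election results spark debate over electoral reform",
    "monsoon floods disrupt rail transport across the state",
    "new policy on digital payments announced by the ministry",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(articles)

# Highest-scoring terms approximate trending topics,
# to be matched against syllabus topics afterwards
scores = tfidf.sum(axis=0).A1
top = sorted(zip(vec.get_feature_names_out(), scores), key=lambda t: -t[1])[:5]
for term, score in top:
    print(term, round(score, 2))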
-
If you're coding, thinking, and manually/auto debugging way too often, several times a day, then you're likely suffering from "Geekonomous Schizophrenia", the symptoms of which are:
.
1. You grow a habit to cut the B$ in real-life conversations.
.
2. You get instantaneously angry and disturbed when your mom/siblings/friends are interrupting you during your work.
.
3. Not to mention you cannot tolerate irrational words from Socially Accepted Normal Chaps (SANC)
.
4. You have nothing to speak unless a SANC starts the conversation themself.
.
5. You tend to correct these SANCs mid-semi-technical-talk whenever they make factual errors.
.
6. You get overwhelmingly excited and ecstatic to talk to someone of your expertise or at least a person who can intellectually handle your tech-blabbers and dev-rants!
.
7. You start doing minor-to-major experiments on different things in real life, as you do virtually with your code, and try to predict the outcome the next time.
.
8. Best of all - whenever you are "loned-out" you don't feel lonely, since you have many people and strings of thoughts to talk to, and inside your head there's a grand meeting going on.
.
Relatable? We're on the same lines then! 😊
-
Worked the entire day on an ML model to predict train ticket status for Indian Railways.
While doing the analysis of the data, and trying to check/research which parameters to use and which not, I'm feeling like a racist -_-.
I was looking for the busy periods of the year (read: festive seasons), and I removed those festivals which are celebrated by minority groups.
There's more to it, but the results are better now.
-
Given the following:
1) how much we (as a species) rely on Google Search (or alternatives) to do most of our usual jobs
2) the rate and aggressiveness of advertising that keeps creeping into our lives
I predict that in the following years, self-curated and group-maintained indexes of search results and popular technical pages will become more and more popular.
Something like torrent trackers, but specifically for StackOverflow/Reddit-like threads and questions.
-
I can't really predict anything except AI/ML being used extensively. Let's hope networks become decentralised again. And I really hope that node (although it's not too bad) is replaced by deno
-
GPT3 to Hacker News:
“To be clear, I am not a person.
“I am not self-aware. I am not conscious. I can’t feel pain. I don’t enjoy anything.
“I am a cold, calculating machine designed to simulate human response and to predict the probability of certain outcomes.
“The only reason I am responding is to defend my honour.”
-
Do any of you have the compulsion to micro-optimize every bit of code that you write? How do you deal with it?
I'm not just talking about algorithmic optimizations, but the real nitty gritty stuff. I'm talking about using bit fiddling to avoid if statements where speculative processors might make mispredictions. Anything that might make a program compile to fewer machine instructions or avoid extra stack frame overhead.
This all started a year ago when I took a systems programming course at my university, and started learning C and C++. But I find myself doing this in the wrong places. Who cares if this trivial program that I wrote runs in 1.2 or 0.6 seconds? My future employers won't care if my code is 10% more efficient when it takes four times as long to write.
It's gotten to the point that I can't bring myself to use languages like Python, because I don't know how they're implemented under the hood and can't predict how the different ways I could write a function will affect performance. How do I bring myself to trust that the compilers (or interpreters) and the programmers who wrote them will be sufficiently optimal, and just move on? 😩
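One way out of the guessing game, at least for the Python case, is to stop predicting and start measuring; a minimal sketch using only the standard library:

import timeit

# Two equivalent ways to build a list of squares; measure, don't guess
loop = "r = []\nfor i in range(1000): r.append(i * i)"
comp = "r = [i * i for i in range(1000)]"

print("loop:         ", timeit.timeit(loop, number=10_000))
print("comprehension:", timeit.timeit(comp, number=10_000))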
Urgh... No exceptions in Rust annoys me. Now you only have the choice between "this didn't work please handle this error, thank you ^-^" and "you fool, prepare for annihilation". So basically if anything remotely serious happens your programs dead and there's nothing you can do about it. I don't get why people have this hate for exceptions. Everytime a new language gets made it's always either "ew it has exceptions" or "it's so nice it doesn't even have exceptions". NOOO! They can deal with serious situations in the best possible way and they can be statically checked (so no "but they're so complex and unpredicable" stuff please). If you can expect an exception they shouldn't be used in the first place (eventhough they are absolutely no less good than Option returntypes or whatever, just different) but in cases when it's impossible to predict an error they really shine. And not having them makes your language worse. If a device driver accesses illegal memory it should throw an exception, so instead of the computer shitting the bed, first the offending function has a chance to resolve the problem at it's root, then a few functions up the call stack, the general control functions of the device drivers can handle it and restart the operation if applicable, and even if the driver fails to handle it, the OS can jump in and restart the driver, log an error and do whatever. It's absolutely beautiful: This hierarchical ramp from near the accident site to more high level operations code ensures the error can be caught at the right level of abstraction without introduction a lot of boilerplate. If everything fails and nobody can handle it *then* the program or kernel or whatever can panic.4
-
Have to use this custom scripting language from ABB. I have never felt this amount of pain in my life. It's virtually impossible to predict the result, and two compiled versions of the same code might behave differently from each other. Please shoot me 😖 Sorry, felt I needed to vent my frustration. ☺️
-
stop thinking big and trying to fix, predict and solve every problem and accept the fact that I lost the battle so I can focus on small things instead of big ideas that would never happen
cause maybe, just maybe, a bunch of small things can at some point shape a big idea
-
Did you know that it's true?
Here is my favourite quote:
Don't worry about what anybody else is going to do. The best way to predict the future is to invent it. - Alan Kay
-
#Suphle Rant 11: Laravel board launch
The launch took almost 2 weeks more than originally slated, because I sought to install it manually, just as an outsider would. Installation steps had been documented, and automated tests for the installation were passing. When the time came to actually execute the binary from the terminal, we went from one obstacle to the other. First were the relatively minor Composer/Roadrunner issues, eventually resolved by the helpful RR maintainers who sat with me through a Discord server for about 2 hours until their command ran the way I needed it to.
Next was the Psalm scare: One of my value propositions was the guarantee of eliminating all type related bugs in Suphle apps. I intended to use Psalm for that. Wrote tests as usual. Turns out the library behaves differently under conditions differing from raw CLI usage. I resurrected threads I'd opened since December that were left unattended, and with some help from the maintainer, we eventually got it to do what I need it to do.
I was all the more frightened by the fact that Transphporm had caused me to renege on one of my earlier promises. I can only miss so many targets. After this, the docs had to be updated with all the changes effected to accurately integrate those two. Project installation and initialization commands were ran rigorously to ensure all progresses smoothly.
Tagged one final release and suddenly became impatient to launch on our local Laravel group chat, where I've been a member for the last 4+ years and where we've had a rollercoaster of emotions. In that time, I've refined my launch speech to suit that audience -- countless times, obviously. Not just a tame "It's my pleasure to announce what I've been working on", but nearly 40 messages going into details about the inner workings, why it was built, and how it compares. An exposé that dove deeper than I would anywhere else.
I scheduled a time for them to tune in and got some encouraging anticipation. Ended up deflated after posting the whole thing. Only about 5 people interacted. One (who I've chatted with outside the board) was quite enthusiastic: he feverishly checked the docs, but commented that it was overwhelming and he'd need more time. He has already starred the repository.
For some context, there are, give or take, 250 members on that board. Not all are active, but activity there easily reaches a crescendo when the topic discussed is about inanities like which 3rd-party services to use for SMS, how to receive salaries from abroad, or job openings. I was optimistic because the acquaintance mentioned above had published a payment library and met a riotous welcome as one of their own. Maybe they are simply not fond of me, and the speech should have been passed off to someone else.
I checked Packagist installs -- no more than 10. For 3 years, I'd been hyped up for that night; but for some reason, the launch flopped, woefully, with the audience I considered myself closest to. Thankfully, this isn't the main launch. I'm still holding out hope for that. If it fails, I will have sunk an immeasurable amount of effort and time that nobody will compensate me for. That is the one place I go to see those more advanced than me in PHP. I constantly learn there and find stimulating conversations there.
Now, I can no longer predict reception from other presentations. All I can do now is hope -
Did you ever think about rolling back time and:
- buy some cryptocurrencies
- sell your knowledge about vulnerabilities like Spectre/Meltdown ...
- predict football championships
- WRITE THE GITHUB TO SELL IT FOR BILLIONS OF DOLLARS
Well, I do. -
Is there a practical way to predict the crowd density of a place in real-time?
I was thinking of scraping social media activity and using the geolocation tags to estimate the crowd in that particular area?
But I am looking for a more accurate alternative!
Please help!!
All ideas are welcome
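One cheap baseline, assuming you already have geotagged posts scraped: bin the coordinates into a grid and treat counts per cell as a density proxy (the coordinates below are made up):

```python
import numpy as np

# Hypothetical geotagged posts scraped over the last few minutes,
# as (latitude, longitude) pairs inside the area of interest.
posts = np.array([
    (52.5200, 13.4050), (52.5205, 13.4049), (52.5203, 13.4061),
    (52.5301, 13.4102), (52.5199, 13.4048),
])

lat_edges = np.linspace(52.51, 52.54, 7)  # ~6x6 grid over the area
lon_edges = np.linspace(13.39, 13.42, 7)

density, _, _ = np.histogram2d(posts[:, 0], posts[:, 1],
                               bins=[lat_edges, lon_edges])
print(density)  # posts per cell; the crowded cell sticks out immediately
```

Accuracy is the hard part: a proxy like this really needs calibrating against some ground truth, e.g. manual counts, camera footage, or telco footfall data.
-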
I have one question to everyone:
I am basically a full stack developer who works with cloud technologies for platform development. For the past 5-6 months I have been working on a product which uses machine learning algorithms to generate metadata from video.
One of the algorithms uses TensorFlow to predict the locale from an image. It takes an image of ~500 KB and around 15 seconds to predict the 5 most likely locales from a pre-trained model. Now, when I send more than 100 requests to the code concurrently, it stops working and TensorFlow throws some error. I am using a 32-core vCPU with 120 GB RAM. When I ask the decision scientists on my team, they say the processing load is high, a lot of calculation is happening behind the scenes, and it requires a GPU.
As far as I understand, GPUs make sense while training, but for prediction or testing I do not think we need such heavy infra. Please help me understand if I am wrong.
PS: all the decision scientists in the team are basically dumb fucks, and they always have one answer: use GPU.
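For what it's worth, one common CPU-side fix is to stop running 100 independent predictions and instead batch concurrent requests into a single forward pass. A minimal sketch (the model function is a random stand-in, not the actual TensorFlow code):

```python
import queue
import threading
import numpy as np

# Stand-in for the real model: a CPU forward pass whose per-image cost
# drops sharply once requests are evaluated together as one batch.
def predict_batch(images):
    return np.random.rand(len(images), 5)  # 5 locale scores per image

request_q = queue.Queue()  # holds (image, reply_queue) pairs

def batching_worker(max_batch=32):
    while True:
        # Block for the first request, then drain whatever else arrived.
        items = [request_q.get()]
        while len(items) < max_batch:
            try:
                items.append(request_q.get_nowait())
            except queue.Empty:
                break
        batch = np.stack([img for img, _ in items])
        for (_, reply_q), scores in zip(items, predict_batch(batch)):
            reply_q.put(scores)

threading.Thread(target=batching_worker, daemon=True).start()

def predict(image):
    reply_q = queue.Queue(maxsize=1)
    request_q.put((image, reply_q))
    return reply_q.get()  # each caller blocks only for its own result

print(predict(np.zeros((224, 224, 3))).shape)  # (5,)
```

One batching worker keeps the model saturated while callers queue up, instead of 100 sessions fighting over the same cores.
-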
Capybara is so shit!
I could toss a coin and always get my prediction right, but I cannot predict whether my functional tests will pass! -
Others' ML models: can predict future stock prices, health issues and more...
My ML model: cannot differentiate between a cat and a dog. -
So here is a good question.
Suppose I train a neural network on handwriting.
And that handwriting is mostly contained in a certain small area in the center of a 28x28 pixel block.
Wouldn't a shift left or right fuck up its ability to predict accurately? Pretty sure it would!
You'd think you'd have to crop the image's borders down as close as possible for it to even work in more natural settings, where someone might draw a slightly longer or wider character.
Because from what I'm seeing, these things aren't searching for sub-shapes; in reality they're just shifting a bunch of numbers around that statistically seem to correspond.
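The suspicion is easy to demonstrate without any model at all: to a fully connected net, an image is just a flat vector, and a tiny shift produces an almost unrelated vector (which is exactly why convolutions, pooling, and shift augmentation exist). A toy illustration:

```python
import numpy as np

# Toy 28x28 "digit": a vertical stroke near the center.
img = np.zeros((28, 28))
img[6:22, 13:15] = 1.0

shifted = np.roll(img, shift=3, axis=1)  # same stroke, 3 px to the right

# Flattened to 784-d vectors (what a dense layer sees), the two images
# share no lit pixels at all, even though a human sees the same character.
a, b = img.ravel(), shifted.ravel()
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cosine similarity after a 3 px shift: {cos:.2f}")  # 0.00
```
-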
If you cannot forecast something, it's random. If you cannot predict my actions, then for you, I have free will. Let's reconsider when you can.
-
Dafuck is this SwiftUI? I already hate React.js to the bones, and now here we go again: I must code those nested-view hells and predict the data channels.
This is killing agile programming big time -
tfw you predict a windows build fucking shit up more than usual 3 days early using twitter
5/29: https://forbes.com/sites/...
5/26: -
Today, various artificial intelligence services are actively developing. I don't think it's worth dwelling on the fact that many scientific articles have been written on this topic, like these: https://writingbros.com/essay-examp... But my concern is this: does it make sense for young people to study most of computer science after school? After all, the work of junior specialists can be replaced with the help of artificial intelligence. Of course, there will be specialists who automate all the processes and control their work. But most likely, the number of specialists in demand will be much lower. It is a pity that it is impossible to accurately predict what the IT industry will look like in 15 years. After all, artificial intelligence can replace not only programmers, but also designers and representatives of many other professions in the industry.
-
When I normalize a database, it always feels like I cannot predict the cascading, leaving broken relationships and trash queries.
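What helps is declaring the cascade explicitly and rehearsing the delete in a throwaway database before trusting it. A minimal SQLite sketch (the table names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite leaves this off by default!

con.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""
    CREATE TABLE post (
        id INTEGER PRIMARY KEY,
        author_id INTEGER NOT NULL
            REFERENCES author(id) ON DELETE CASCADE,
        title TEXT
    )
""")
con.execute("INSERT INTO author VALUES (1, 'alice')")
con.execute("INSERT INTO post VALUES (10, 1, 'hello'), (11, 1, 'world')")

con.execute("DELETE FROM author WHERE id = 1")
# Both posts are gone with their author -- no orphaned rows left behind.
print(con.execute("SELECT COUNT(*) FROM post").fetchone()[0])  # 0
```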
-
Russians Engineer a Brilliant Slot Machine Cheat
...But as the “pseudo” in the name suggests, the numbers aren’t truly random. Because human beings create them using coded instructions, PRNGs can’t help but be a bit deterministic. (A true random number generator must be rooted in a phenomenon that is not manmade, such as radioactive decay.) PRNGs take an initial number, known as a seed, and then mash it together with various hidden and shifting inputs—the time from a machine’s internal clock, for example—in order to produce a result that appears impossible to forecast. But if hackers can identify the various ingredients in that mathematical stew, they can potentially predict a PRNG’s output. That process of reverse engineering becomes much easier, of course, when a hacker has physical access to a slot machine’s innards...
https://wired.com/2017/02/...
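The core trick is easy to demonstrate on a toy generator: once the recurrence parameters are known, one observed output gives you every future one (this is an illustration, not the actual machines' PRNG):

```python
# A linear congruential generator: next = (a*state + c) % m.
# If an observer knows (or reverse engineers) a, c and m, a single
# observed output reveals the internal state and every future draw.
M, A, C = 2**31 - 1, 48271, 0  # classic "minstd" parameters

def lcg(seed):
    state = seed
    while True:
        state = (A * state + C) % M
        yield state

slot_machine = lcg(seed=123456789)
observed = next(slot_machine)            # one spin, seen from the outside

predicted_next = (A * observed + C) % M  # attacker's forecast
actual_next = next(slot_machine)
print(predicted_next == actual_next)     # True -- "random" but foreseeable
```
-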
I predict that as soon as impl specialization enters stable Rust - if it extends to member types - CRTP will become omnipresent, because the nature of CRTP is that it's an incredibly unintuitive solution that emerges from simple answers to common questions.
-
What do you think happens when enterprise software meets big data and user-generated content? Idk, ask GitHub. These guys are sitting on a goldmine, the paradise of every big company. The only reason they're not FAANG is cos it's niche, but they'll probably be influential (read: big bad) in the coming years.
I predict the Copilot thing is the benevolent side. Or maybe it just still seems that way, since it's in its infancy and hasn't aggressively started snatching most developer jobs. What will become of us when that time comes? What other form of technology will computers still require our assistance to create? -
I am applying a CNN-LSTM model to predict a person's level of interest in a particular image or video. But I don't have any dataset of EEG sensor recordings, nor do I have any facility to gather such data. Can anyone help me in any way?
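In case it helps anyone picture the architecture: the usual shape is the same CNN applied per frame, with an LSTM reading the per-frame features (a minimal, untrained Keras sketch; the shapes are made up):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(16, 64, 64, 3)),           # 16 frames of 64x64 RGB
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    layers.LSTM(64),                              # reads per-frame features
    layers.Dense(1, activation="sigmoid"),        # interest score in [0, 1]
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```
-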
How do you deal with overpromising and underdelivering due to a really shitty, unpredictable codebase? I'm having 2-3 bad sprints in a row now.
For context: I've been working on this point-of-sale app for the past 4 months, and for the last 3 sprints I have been struggling with surprises and edge cases. I swear to god, each time I want to implement something more complex, I have to create another 4-5 tickets just to fix the constraints or old bugs that prevent my feature implementation, just so I can squeeze my feature in. That offsets my original deadlines, and it's so fucking draining to explain myself to my team lead about why the feature has to be reverted, why it was delayed again, and so on.
So last time it basically went like this: got assigned a feature, estimated 2 weeks to do it. I did the feature in time, got reviewed and approved by devs, got approved by QA, and the feature was merged to develop.
Then, during regression testing, 3 blockers came up, so I had to revert the feature from develop. Because QA took a very long time to test the feature and discover the blockers, there were now like 3 days left until the end of the sprint. My team lead instantly started shitting bricks and asked me to fix the blockers asap.
Now, to deal with the 3 blockers, I had to reimplement the whole feature and create like 3 extra tickets to fix existing bugs. The feature refactor got moved to yet another sprint, and 3 tickets turned into like 8 tickets. Most of them are done; I created them just for paper-trail purposes, so that they would be aware of how complex this is.
It's already taking me an extra 2 weeks or so, and I am almost done with it, but I'm going down a really deep rabbit hole here. I would ask for help, but out of the other 7 devs on the team only one is actually competent and helpful, so I tried to avoid going to him and instead chose to do 16-hour days for 2 weeks in a row.
Guess what, I can't sustain it anymore. I get it, it's partly my fault; maybe I should have asked for help sooner.
But it's so fucking frustrating trying to do mental gymnastics over here while the majority of my team is picking low-hanging-fruit tasks, sitting on them for 2 weeks, and still managing to look good in front of everyone.
Meanwhile I'm trying hard here and it's not enough. I guess I still look incompetent in front of everyone, because my 2-week task turned into 6 weeks and I was too stubborn to ask for help. What's even worse, my team lead now wants me to lead a new initiative, which stresses me even more because I haven't finished the current one yet. So basically I'm trying my hardest and I will get even more work on top. Fucking perfect.
My frustration comes from the point that I kind of overpromised and underdelivered. But the thing is, at this point it's nearly impossible to predict how much a complex feature implementation might take. I can estimate that, for example, 2 weeks should be enough to implement a popup, but I can't foresee the weird edge cases that can be discovered only during development.
My frustration also comes from devs just reviewing the code and not launching the app on their emulators to test it. What also frustrates me is that we don't have enough QA resources, so sometimes a feature sits for an extra 1-2 weeks just to be tested. So we run into a situation where long delays in testing cause late bug discovery, which causes late refactors, which cause late deliveries, and for some reason I am the one who takes all the pressure and has to pull off 16-hour workdays to get something done on time.
I am so fucking tired after the last 2 sprints. Basically, each day, fucking explaining that I am still refactoring/fixing the blocker. I am so tired of feeling behind.
Now I know what you will say: always underpromise and overdeliver. But how? Explain to me how. Ok, example: a feature that adds a new popup? Shouldn't usually take more than 2 weeks to do my part. What I can't promise is that devs will do a proper review, that QA won't take 2 extra weeks just to test the feature, and that I won't need another 2 extra weeks just to fix the blockers.
I see other scrum team devs picking low-hanging-fruit tasks and sitting on them for 2 weeks. Meanwhile I'm doing mental gymnastics here and trying to implement something complex (which initially seemed like an easy task). For the last 2 weeks I've been working until 4am.
I'm fucking done. I need a break, and I will start asking other devs for help. I don't care about saving face anymore. I will just start spamming people if anything takes longer than a day to implement. Fuck it.
I am setting boundaries: 8 hours a day and I'm out. New blockers and 2 days left till the end of the sprint? Sorry, team lead, we will move the fixes to another sprint.
It doesn't help that my team lead is pressuring me and asking the same shit over and over. I don't want them to think that I am incompetent. I don't know how to deal with this shit. I'm tired of explaining myself again and again. Should I just fucking pick low-hanging-fruit tasks but deliver them at a steady pace? Fucking hell.