Search - "prediction"
-
https://git.kernel.org/…/ke…/... I'm sure some of you are working on the patches already; if you are, let's connect, because I'm an ardent researcher in this area right now.
So here it goes:
As soon as the kernel page table isolation (KPTI) bug is out of embargo, WhatsApp and FB will be flooded with overnight kernel "shikhuritee" experts who will share shitty advice non-stop.
1. The bug under embargo is a side channel attack, which exploits the fact that Intel chips come with speculative execution without proper isolation between user pages and kernel pages. Therefore, with careful scheduling, a timing attack will reveal some information from kernel pages while the code is running in user mode.
In easy terms, if you have a VPS, another person with a VPS on the same physical server may read memory being used by your VPS, which will result in unwanted data leakage. To make matters worse, malicious JS from an innocent-looking webpage might be (might be, because JS does not provide language constructs for such fine-grained control; at least none that I know of as of now) able to read kernel pages, and pwn you real hard, real bad.
2. The bug comes from too much reliance on Tomasulo's algorithm for out-of-order instruction scheduling. It is not yet clear whether the bug can be fixed with a microcode update (and if not, Intel has to fix this in silicon itself). As far as I can dig, there is nothing that hints this bug is fixable in microcode, which makes the matter much worse. Also, as I understand it, a microcode update would be too superficial a fix for this kind of hardware bug.
3. A software-only remedy is possible, and it is being implemented by all major OSes (including our lovely Linux) in kernel space. The patch forces the Translation Lookaside Buffer (TLB) to flush if a context switch happens during a syscall (this is what I understand as of now). The benchmarks suggest the slowdown will be somewhere between 5% (best case) and 30% (worst case).
4. Regarding point 3, which syscalls you make doesn't matter much; what matters is how often you make them. For example, if you are using read() or write() on 8 MB buffers, you won't see much slowdown; but if you call the same syscalls once per byte, a heavy performance penalty is guaranteed. All I/O-heavy processes are going to suffer (hosting and databases are two common examples). See the sketch after this list.
5. The patch can be disabled in Linux by passing argument to kernel during boot; however it is not advised for pretty much obvious reasons.
6. For gamers: this is not going to affect games (because those are not I/O heavy)
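To make point 4 concrete, here's a tiny sketch (my own illustration, not from the original write-up; the scratch-file path and sizes are arbitrary). It times reading the same file one byte per read() versus in one big read, which is exactly the syscall-count difference the patch punishes:

```python
import os, time

PATH = "/tmp/kpti_demo.bin"                 # hypothetical scratch file
with open(PATH, "wb") as f:
    f.write(os.urandom(1024 * 1024))        # 1 MB of junk data

def read_all(chunk_size):
    fd = os.open(PATH, os.O_RDONLY)
    start = time.perf_counter()
    while os.read(fd, chunk_size):          # one syscall per loop iteration
        pass
    os.close(fd)
    return time.perf_counter() - start

print("1-byte reads:", read_all(1), "s")             # ~a million syscalls
print("1 MB read   :", read_all(1024 * 1024), "s")   # a handful of syscalls
os.remove(PATH)
```

With KPTI enabled, every one of those syscalls pays the extra TLB-flush tax, so the first loop gets disproportionately slower.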
Meltdown: "Meltdown", targeted at desktop chips, can read kernel memory from the L1D cache. Only Intel is affected by this variant.
Spectre: Spectre is a hardware vulnerability in implementations of branch prediction that affects modern microprocessors with speculative execution, allowing malicious processes access to the contents of other programs' mapped memory. Works on all chips, including Intel/ARM/AMD.
For updates, refer to the kernel tree: https://git.kernel.org/…/ke…/...
For further details and more chit-chat, refer to: https://lwn.net/SubscriberLink/...
~Cheers~
(Originally written by Adhokshaj Mishra, edited by me. )23 -
Every day.
I am a PHP developer.
Yeah, "another PHP is awful" rant... no, not really.
It's just unsuitable for some ambitious projects, just like Ruby and Python are.
First of all, DO NOT EVER use Laravel for large enterprise applications. The same goes for RoR, Django, and other ActiveRecord MVCs.
They are all neat frameworks for writing a todo app, as a better-than-wordpress flexible blogging solution, even as a custom webshop.
Beyond 50k daily users, Active Record becomes hell due to its lazy, fat querying habits. At more than a million users... *depressed sigh*.
PHP is also completely unsuitable for projects beyond 5M lines of code in my opinion. At more than 25M lines... *another depressed sigh*.
You can let your devs read Clean Code and books about architecture patterns, you can teach them about SOLID & DRY, you can write thousands of tests... it doesn't matter.
PHP is scaffolding, it's made of bamboo and rope. It's not brick or concrete. You can build quickly, but it only scales up to a certain point before it breaks in multiple places.
Eventually you run into patterns where even 100% test coverage still doesn't guarantee shit, because the real-life edge cases are just too complex and numerous.
When you're working on a multi-party invoicing system with adapters for various tax codes, or an availability/planning system working across timezones, or systems which implement geographical routefinding coupled to traffic, event & weather prediction...
PHP, Python, Ruby, etc are just missing types.
Every day I run into bugs which could have been prevented if you could use ADTs in a generic way in PHP. PHP7 has pretty good typehints, and they prevent a lot of messy behavior, but they aren't composable. There is no way to tell PHP "this method accepts a Collection of Users", or "this method returns maybe an Apple or a Pear, and I want to force the caller to handle both Apple/Pear and null".
Well, you could do that, but it requires a lot of custom classes and trickery, and you have to rewrite the same logic if you want to typehint a "Collection of Departments" instead of "Collection of Users" -- i.e., it's not composable.
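For contrast, this is roughly what that composability looks like where generics exist; a hypothetical sketch in Python's type-hint syntax (my choice of language for illustration, obviously not PHP), where Collection[User] and Collection[Department] reuse the same code and the Apple/Pear-or-null case is pushed onto the caller:

```python
from typing import Generic, Optional, TypeVar, Union

T = TypeVar("T")

class Collection(Generic[T]):
    """One typed collection; Collection[User] and Collection[Department]
    share the same logic instead of duplicating it per element type."""
    def __init__(self, items: list[T]) -> None:
        self.items = items

class User: ...
class Department: ...
class Apple: ...
class Pear: ...

def deactivate(users: Collection[User]) -> None:
    ...  # a type checker rejects Collection[Department] here

def pick_fruit(basket: Collection[Union[Apple, Pear]]) -> Optional[Union[Apple, Pear]]:
    # Optional forces the caller to handle the "no fruit" case explicitly.
    return basket.items[0] if basket.items else None
```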
Probably the biggest issue is that languages with a strong static type system (Haskell, Rust, even C#/JVM languages to some degree, etc) are much slower to develop in during the "startup" era of a project, so you grab a weak, quick prototyping language to get started.
Then, when you reach a more grown up phase, you wish you had a better type system at your disposal...28 -
November brings .NET 5, for anyone who cares about that, and after listening to my husband watch Ignite "reveal" advertising containers, and all the enterprise virtue signaling therein, I am now at the point where the only thing I can think is "Fuck you Microsoft, and fuck .NET 5."
During a 30 minute speech, the director of the dotnet platform commits the following flagrant faux pas:
1. Introduces tons of visual studio easy buttons for shit we already do, no mention of VS code support.
2. Shows tools that anyone other than the most insular enterprise mouth-breather has been using for no less than 6 years
3. Gives absolutely no credit to the Open Source community projects backing the features he's showing
4. Shows nothing but mono-cloud integration, makes no mention of any other cloud targets for new features
5. Acts like "deploy your app to the cloud from the IDE" is something anyone should be doing in 2020
6. Showed an API repl that is pathetic compared to httpie when it was in alpha
7. Showed blazor loading from cache and said "Look at how instantaneous it is" (if you ignore the 5mb of cached payload it took to run the hello world demo)
8. Shows Project Tye, presenting it as a new groundbreaking xyz, fails to mention helm already exists
What's absent is what is most offensive:
- acknowledgment of community contribution
- no linux/mac tools, entirely windows-centric (which jibes with my prediction of second-class citizenship for the people who contributed to .net core the most)
- cross-cloud capabilities
- bash/zsh (again with the untermensch relegation)
Fucking microsoft back to their old bullshit.24 -
sometimes in my head i go through code i wrote some time ago and think: "did i think of this case? if that happens something could go very wrong." when i look at the code i see that i already thought about it and caught the case back then. then i am like "daaaamn i am good".
do you know that feel? :D1 -
(Warning: kinda long && somewhat of a political rant)
Every time I tell someone I work with AI, the first thing to come out of their mouth is "oh but AI is going to take over the world!"
No.
It was only somewhat recently that it started being able to recognize what was in a picture out of over 3 million images, and even that it's not that great at. Honestly people always say "AI is just if-else" ironically, but it isn't really that far from the truth: we just multiply an input by weights and check the output.
It isn't some magical sauce, it's not being born and then exploring a problem, it's just glorified probability prediction. Even in "unsupervised" learning, the domain set is provided; in "reinforcement learning", which has gotten super popular lately, we just have the computer decide which policy is optimal and apply that to an environment. It's a glorified decision tree (and technically tree models like XGBoost outperform neural networks and deep learning on a large number of problems) and it isn't going to "decide" to take over the planet.
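If it helps, the "multiply an input by weights and check the output" part really is this unmagical; a toy sketch with made-up numbers:

```python
import numpy as np

x = np.array([0.2, 0.7, 0.1])                    # input features (made up)
W = np.random.randn(3, 2)                        # "learned" weights
b = np.zeros(2)                                  # bias

logits = x @ W + b                               # multiply input by weights
probs = np.exp(logits) / np.exp(logits).sum()    # softmax: glorified probability
print(probs.argmax(), probs)                     # check the output
```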
Honestly all of this is just born out of Elon Musk fans who take his word as truth and have been led to believe that AI is going to take over the world. There are a billion reasons why it can't! And to top it off this takes away a lot of public attention from VERY concerning ethical issues with AI.
Am I the only one who saw Google Duplex being unveiled and immediately thought "fraud"? Forget phone scammers, if you trained duplex on the mannerisms of, for example, a famous politician's voice, you could impersonate them in an audio clip (or even video clip with deepfakes). Or for example the widespread use of object detection and facial recognition in surveillance systems deployed by DoD. Or the use of AI combined with location tracking and browsing analytics for targeted marketing.
The list of ethics breaches is endless, and I find it super suspicious that those profiting the most off of unethical AI are all too eager to shift public concern to some science-fiction, Terminator-style takeover that, if ever possible, would be a long way out and is not any sort of priority issue right now.11 -
In high school I took a special major in which we learned various computer and mathematics skills such as neural networks, fractals, etc.
One of the teachers there, who for me was also a mentor, is a physicist. He taught us Python, which he didn't know very well (he wasn't that bad either), and science, which was his true passion.
My end project was to try to predict the stock market using a simple neural network and daily graphs of 50 NSDQ companies. The result reached 51% prediction accuracy on average, which was awful, but I couldn't forget the happiness and curiosity working on this project made me feel.
Now, 5 years later, I have a BSc and am finishing an MSc in Computer Science, and I sincerely want to thank this mentor for giving me the guts and will to accomplish this.7 -
why do i feel like sOfTwArE cErTifiCaTiOnS are the biggest scam in the world
literally zero prediction of the quality of work you will produce, cuz you could float through them anyway
i guess even more 🤡 is companies that request certain "certifications"
correct me if i'm wrong; are there certifications with rigor equivalent to the fundamentals of engineering (fe) exam? in this sense, our industry is far behind... though to be fair, 90% of software isn't life- or operations-critical the way building a freaking bridge is10 -
Ya'll know what... If humans weren't such annoying vulnerability-searching little shits then we wouldn't have had to implement any protection against them, and think of all the performance that would be saved. Take branch prediction vulnerability mitigation in the Linux kernel for example; that's got to make a performance hit of at least 10% on basically everything.
Alas, I do get why security is important and why we keep such vulnerability mitigation running despite the performance hit. I get why safe code is necessary but still... if these people weren't such annoying little bastards.
Yeah, I was just kind of set off by the above. So much would be faster and easier if only the programmers wouldn't have to plan for people exploiting their software. Software would be written much faster and humans would progress to stuff that actually matters like innovation.8 -
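If you want to see what that trade-off looks like on your own box, recent Linux kernels expose the mitigation status under /sys; a small sketch (assuming Linux and a kernel new enough to have that directory):

```python
from pathlib import Path

# Each file is named after a CPU vulnerability (meltdown, spectre_v2, ...)
# and its contents say whether a mitigation is active for it.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name:25s} {entry.read_text().strip()}")
```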
Hello everyone,
I'm new here. [OK. Let's skip this]
I want to know where to begin on my journey of learning how to create a program that predicts what a user will say next, by storing things the user has already said and building specific characteristics for each user.
I know that I will need to train it with some data first lol.
But how will it do the prediction? That's the part I need to understand.
I'm sorry for my bad English btw.7 -
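A minimal sketch of one common approach to the prediction part, purely for illustration (a per-user bigram/Markov counter; none of this is from the question itself):

```python
from collections import defaultdict, Counter

def train(model, text):
    """Count which word follows which in the user's past messages."""
    words = text.lower().split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1

def predict(model, word):
    """Return the most frequent follower of `word`, if any."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

model = defaultdict(Counter)
train(model, "i am going home i am going to sleep i am tired")
print(predict(model, "am"))     # -> 'going' (seen twice, 'tired' only once)
print(predict(model, "sleep"))  # -> 'i'
```

A real keyboard or chatbot replaces the counts with a proper language model, but the "store what was said, look up what usually comes next" idea is the same.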
There's so much hype and bullshit around Machine Learning (ML). And if I have to read one more crappy prediction of who survived on the Titanic, I'll go postal.
So, what real-world problems are you using it to address...and how successful has it been? What decisions have you supported using ML? What models did you use (e.g. logistic regression, decision trees, ANN)?
Anyone got any boringly useful examples of ML in production?
And don't say you're using it to predict survival rates for the design of new cruise ships...although, to be fair, that might be quite interesting...6 -
I've assembled enough computing power from the trash. Now I can start to build my own personal 'cloud'. Fuck I hate that word.
But I have a bunch of i7s, and i5s on hand, in towers. Next is just to network them, and setup some software to receive commands.
So far I've looked at Ray and Dispy for distributed computation. If there are others that any of you are aware of, let me know. If you're familiar with any of these and know which one is the easier approach to get started with, I'd appreciate your input.
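For what it's worth, Ray was the easier of the two for me to picture; a minimal sketch of farming work out to remote workers (the work function here is a placeholder, not the actual factor-prediction code):

```python
import ray

ray.init()  # or ray.init(address="auto") once the scavenged boxes form a cluster

@ray.remote
def check_candidate(n):
    # stand-in for the real per-semiprime computation
    return n, n % 7 == 0

futures = [check_candidate.remote(i) for i in range(1_000)]
results = ray.get(futures)        # blocks until all remote tasks finish
print(results[:5])
```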
The goal is to get all these machines up and running, a cloud that's as dirt cheap as possible, and then train it on sequence prediction of the hidden variables derived from semiprimes. Right now the set is unretrievable, but there are a lot of heavily correlated known variables, and so I'm hoping the network can derive better and more accurate insights than I can in a pinch.
Because any given semiprime has numerous (hundreds of known) identities which immediately yield both of its factors if, say, a certain constant or quotient is known (it isn't), knowing any *one* of them plus the correct input is equivalent to knowing the factors of p.
So I can set each machine to train and attempt to predict the unknown sequence for each particular identity.
Once the machines are set up and I've figured out which distributed library to use, the next step is to set up Keras and train the model using, say, all the semiprimes under one to ten million.
I'm also working on a new way of measuring information: autoregressive entropy. The idea is that the prevalence of small numbers when searching for patterns in sequences is largely ephemeral (there's no long-term pattern) and AE allows us to put a number on the density of these patterns in a partial sequence, but it's only an idea at the moment and I'm not sure what use it has.
Here's hoping the sequence prediction approach works.17 -
anyone else make grand predictions in the past which turned out to be a bit off, like:
"Silverlight is the future its going to replace html/javascript" :))
"Windows Phone 7 is going to take over from Android and Apple"6 -
Still on the prime numbers bender.
Had this idea that if there were subtle correlations between a sufficiently large set of identities and the digits of a prime number, the best way to find them would be to automate the search.
And thats just what I did.
I started with trace matrices.
I actually didn't expect much of it. I was hoping I'd at least get lucky with a few chance coincidences.
My first tests failed miserably. Eight percent here, 10% there. "I might as well just pick a number out of a hat!" I thought.
I scaled it way back and asked if it was possible to predict *just* the first digit of either of the prime factors.
That also failed. Prediction rates were low still. Like 0.08-0.15.
So I automated *that*.
After a couple days of on-and-off again semi-automated searching I stumbled on it.
[1144, 827, 326, 1184, -1, -1, -1, -1]
That little sequence is a series of identities representing different values derived from a randomly generated product.
Each slots into a trace matrix, the results of which predict the first digit of one of our factors, with 83.2% accuracy even after 10k runs, and rising higher with the number of trials.
It's not much, but I was kind of proud of it.
I'm pushing for finding 90%+ now.
Some improvements include using a different sort of operation to generate results. Or logging all results and finding the digit within each result that's *most* likely to predict our targets, across all results. (Right now I just take the digit in the ones column, which works but is an arbitrary decision on my part.)
There's also the fact that it's trivial to correctly guess the digit 25% of the time, simply by guessing 1, 3, 7, or 9, because all primes, except for 2 and 5, end in one of these four.
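That baseline is easy to sanity-check; a quick sketch (sympy's randprime just fabricates products here, this has nothing to do with the trace matrices themselves):

```python
import random
from sympy import randprime

hits, trials = 0, 10_000
for _ in range(trials):
    p, q = randprime(10, 10_000), randprime(10, 10_000)
    target = str(p)[-1]              # ones digit of one factor
    guess = random.choice("1379")    # blind guess among the four candidates
    hits += (guess == target)
print(hits / trials)                 # hovers around 0.25
```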
I have also yet to find a trace with a specific bias for predicting either the smaller of two unique factors *or* the larger. But I haven't really looked for one either.
I still need to write a generator that takes specific traces, and lets me mutate some of the values, to push them towards certain 'fitness' levels.
This would be useful not just for very high predictions, but to find traces with very *low* predictions.
Why? Because it would actually allow for the *elimination* of possible digits, much like sudoku, from a given place value in a predicted factor.
I don't know if any of this will even end up working past the first digit. But splitting the odds, between the two unique factors of a prime product, and getting 40+% chance of guessing correctly, isn't too bad I think for a total amateur.
Far cry from a couple years ago claiming I broke prime factorization. People still haven't forgiven me for that, lol.6 -
Get ready for an awesome conspiracy theory / WhatsApp forward :D I like how people come up with new stuff every minute of their boredom. Makes you ponder:
====================================
🔥🔥🔥🔥🔥🔥
How to dominate the world quickly?
THE GREAT CHINESE STAGE
1. Create a virus and the antidote.
2. Spread the virus.
3. A demonstration of efficiency, building hospitals in a few days. After all, you were already prepared, with the projects, ordering the equipment, hiring the labor, the water and sewage network, the prefabricated building materials and stocked in an impressive volume.
4. Cause chaos in the world, starting with Europe.
5. Quickly plaster the economy of dozens of countries.
6. Stop production lines in factories in other countries.
7. Cause stock markets to fall and buy companies at a bargain price.
8. Quickly control the epidemic in your country. After all, you were already prepared.
9. Lower the price of commodities, including the price of oil you buy on a large scale.
10. Get back to producing quickly while the world is at a standstill. Buy what you negotiated cheaply in the crisis and sell more expensive what is lacking in countries that have paralyzed their industries.
After all, you read more Confucius than Karl Marx.
PS: Before laughing, read the book by Chinese colonels Qiao Liang and Wang Xiangsui, from 1999, “Unrestricted Warfare: China’s master plan to destroy America”, on Amazon, then we talk. It's all there.
🔥🔥🔥🔥🔥🔥🔥🔥
Worth pondering..
Just Think about this...
How come Russia & North Korea are totally free of Covid- 19? Because they are staunch ally of China. Not a single case reported from this 2 countries. On the other hand South Korea / United Kingdom / Italy / Spain and Asia are severely hit. How come Wuhan is suddenly free from the deadly virus?
China will say that their drastic initial measures they took was very stern and Wuhan was locked down to contain the spread to other areas. I am sure they are using the Anti dode of the virus.
Why Beijing was not hit? Why only Wuhan? Kind of interesting to ponder upon.. right? Well ..Wuhan is open for business now. America and all the above mentioned countries are devastated financially. Soon American economy will collapse as planned by China. China knows it CANNOT defeat America militarily as USA is at present
THE MOST POWERFUL country in the world. So use the virus...to cripple the economy and paralyse the nation and its Defense capabilities. I'm sure Nancy Pelosi got a part in this. . to topple Trump. Lately President Trump was always telling of how GREAT American economy was improving in all fronts. The only way to destroy his vision of making AMERICA GREAT AGAIN is to create an economic havoc. Nancy Pelosi was unable to bring down Trump thru impeachment. ....so work along with China to destroy Trump by releasing a virus. Wuhan,s epidemic was a showcase. At the peak of the virus epidemic. ..
China's President Xi Jinping...just wore a simple RM1 facemask to visit those effected areas. As President he should be covered from head to toe.....but it was not the case. He was already injected to resist any harm from the virus....that means a cure was already in place before the virus was released.
Some may ask....Bill Gates already predicted the outbreak in 2015...so the chinese agenda cannot be true. The answer is. ..YES...Bill Gates did predict. .but that prediction is based on a genuine virus outbreak. Now China is also telling that the virus was predicted well in advance. ....so that its agenda would play along well to match that prediction. China,s vision is to control the World economy by buying up stocks now from countries facing the brink of severe ECONOMIC COLLAPSE. Later China will announce that their Medical Researchers have found a cure to destroy the virus. Now China have other countries stocks in their arsenal and these countries will soon be slave to their master...CHINA.
Just Think about it ...
The Doctor Who declared this virus was also Silenced by the Chinese Authorities...14 -
Prediction of a future rant:
Guys I'm starting a Devrant addiction recovery movement.
I've become addicted since it fills me with delight to read all the rants.
It's so bad that my work has suffered.
The first step is admitting I have a problem
Actually it doesn't matter, all my projects get canceled anyway so no one noticed I stopped coding.5 -
My boss drives me crazy. He hired me to work on his SDK, which is game related. So I am responsible for basically everything, including an ingame UI (menu etc.) and predicting the future path of a game object (unit, minion, ...) when a certain spell is cast on it. For that task I divided the prediction into first getting the predicted path of the unit without a spell being cast, and then a class that would cast the spell on that path and estimate the unit's reaction to that cast. Simplified, but that way you get a pretty okayish result. Now he thinks that is too complicated. "Can we not put everything into one class, if someone wants to replace the prediction he needs to read documentation for hours". WHAT THE FUCK DID YOU EXPECT, THAT IT'S GONNA BE SOME ONE CLASS 3K LINES MAGIC??
Same for the GUI. We only have DirectX and don't want to use a framework. Guess what, it's more than one class if you want to separate view, model, controller or whatever fucking "design pattern" thing you use.
And then Git... he seriously said "let's not use branches till release, I feel like they slow things down"... before I was there they did every operation on master.
And if it was just that..
/rant
I put much work into this, time to leave?1 -
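For the record, the split the boss hated is just ordinary separation of concerns; a stripped-down sketch of the shape of it (class names, fields and numbers invented for illustration, and the real SDK isn't Python):

```python
from dataclasses import dataclass

@dataclass
class Unit:
    position: float
    velocity: float

class PathPredictor:
    """Stage 1: where the unit will be over the next N ticks if no spell is cast."""
    def predict(self, unit, ticks):
        return [unit.position + unit.velocity * t for t in range(ticks)]

class CastSimulator:
    """Stage 2: cast the spell onto that predicted path and estimate the reaction."""
    def __init__(self, path_predictor):
        self.path_predictor = path_predictor

    def best_cast_position(self, unit, travel_ticks):
        path = self.path_predictor.predict(unit, travel_ticks)
        # a smarter reaction model slots in here without touching PathPredictor
        return path[-1]

print(CastSimulator(PathPredictor()).best_cast_position(Unit(0.0, 1.5), 10))  # 13.5
```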
This is my understanding of "Machine Learning" in general
There are two sets of data:
1. In the first data set, all the properties are known.
2. In the second set, some properties are not known.
The goal of machine learning is to find the values of the unknown properties of the second data set.
We do this by finding (or training) a suitable machine learning model (mathematical, logical, or any combination of the two) that, on the first data set, computes the values of the properties which are unknown in the second data set, with minimum error, since there we already know the real values of those properties.
Now, use this model to predict the unknown properties from the second data set.3 -
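That's essentially supervised learning in four sentences; the same idea as a tiny sketch with scikit-learn (the numbers are invented):

```python
from sklearn.linear_model import LinearRegression

# First data set: every property known, including the one we care about.
X_known = [[1, 2], [2, 3], [3, 5], [4, 7]]
y_known = [3, 5, 8, 11]

# Train the model to reproduce the known property with minimum error.
model = LinearRegression().fit(X_known, y_known)

# Second data set: same features, target property unknown, so predict it.
X_unknown = [[5, 8], [6, 10]]
print(model.predict(X_unknown))
```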
People think Machine Learning is all about using Super Complex Prediction Models...
But it turns out you devote most of your time to data gathering and data cleaning (preprocessing).2 -
So IBM finally jettisoned the cancer that was Virginia Rometty a few weeks back. They had an opportunity to move fresh blood and a solid managerial background into the top slot with Jim Whitehurst (Redhat) and try to recover their flagging market share and pursue some sane business strategy. They passed on that opportunity and instead appointed the old-guard bootlicker who overpaid for Redhat to the tune of 20x what it was worth, and signalled their intent to continue staying the course of the Titanic and its slow, inevitable trek towards the bottom of the ocean. The board wants a yes man, and they got one.
This is basically what I assumed would happen, but I have some other predictions as well:
- Whitehurst will leave for a better company
- the redhatters that haven't already left will be replaced with commodity labor
- Redhat will be the least stable Linux offering 2 years after the last hatter leaves
- they will sell off most of their existing software assets to HCL/ similar consulting partners like they did with domino and websphere to stem the bleed
- the displaced in that move will either quit or be replaced
- their cloud initiative will collapse under the weight of its own stagnation and glacial pace of development
- they will attempt to salve these wounds by moving focus to global services, reducing profit loss by cutting salary costs, further diluting their eroding ability to innovate
- they'll buy at least one other trendy software company at ridiculous valuation, and sell it off within 2 years at a massive loss
- the CEO slot will start to resemble the late Roman empire with a new CEO every other week
- Redhat assets will be sold to Google inside of 5 years
Last prediction: I will be overjoyed being able to witness the death of IBM in my lifetime. Fuck them 🍻7 -
I am going to make a prediction. C++ and Rust will be pitted against each other in a political manner. C++ will be likened to far right and Rust will be likened to far left.
It won't make sense, but it will be used to try and make language choice some political garbage. Rather than technical merits. There is already a Boomers use C/C++ and enlightened programmers use Rust kind of thing going. End of prediction.
DARPA has decided there is a consensus of programmers saying C needs to be replaced by Rust or some bullshit. Whenever I see "consensus" I automatically read "we decided for you".10 -
Prediction:
Windows 11 will be forced on Windows 10 users through similar tactics as Windows 10 was forced on users.
I started looking at how to prevent this. The wording from some tech support people seemed to indicate this.
https://answers.microsoft.com/en-us...
The response seems like, "oh yeah, some will be forced, like home users."
Don't trust these MS assholes at all.
I am thinking of getting a third drive and installing SteamOS. See if it is "really" an option for some games.27 -
Actually, it happened just before my current holidays.
I had prepared a whole system to feed and use a machine learning model. My colleague and some others had been working on a great thing, all encapsulated, all abstracted for my system.
My last day at the office, they had it ready.
I install their thing, load one model and launch one dummy prediction: error. I try with other input data: error
I try debugging a bit more, errors all the way. Knowing them, I asked if they wrote some unit tests.
"Sure we did"
I find the tests, yes there are some. And I notice:
"Hey, I see that in all your tests, you're making more than one prediction at a time (=aka using a matrix with more than one row)
- yeah, and it work fine
- in the project, we're doing one prediction at a time, did you try it with one prediction?"
He tries: error, that was totally what I said.
I started ranting about losing the scope of the project, and why we write tests in the first place.
Then, I grabbed my coat, said "see you in one week" and let them rework their code.
I was so angry at them, it seemed so basic to just check that 👹 -
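The missing case was literally one extra test; a hedged sketch of what I meant (the fixture name and n_features attribute are invented, the real wrapper isn't public):

```python
import numpy as np

def test_predict_single_row(model):
    """The project calls the model one prediction at a time, so a one-row
    matrix must work, not just the multi-row batches the existing tests used."""
    single = np.zeros((1, model.n_features))   # exactly one row
    result = model.predict(single)
    assert len(result) == 1
```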
Context: New to typescript. Writing a thing, doing it for work, good opportunity to stretch my dev legs. Using a proprietary lib, alternatives not an option.
Rant begin:
SOOOO, who the fuck thought THIS was a good idea:
1. Lib has minified react in dev (because closed source) meaning no downstream errors AND the entire premise of the lib is that a widget is a react component, so I'm writing typescript react the entire time without downstream errors
2. SHIT docs. By that, I mean there's an API reference page that's so sparse there's literally a set of CRUCIAL interfaces that only say the word 'Interface' on them. That's it. that's what i get. It's an interface. NO FUCKING SHIT SHERLOCK, what the fuck is it though? What's its purpose? Is it an interface for a dog? A dog that has a 'shit' property? or a cat? or a cat eating dog shit? Nobody fucking knows - the docs sure as fuck don't care.
3. No syntax highlighting - editors, IDEs (i've tried a few) can't even find the lib inside this environment, so Code and everything else thinks I'm importing shit that doesn't even exist - so no error prediction, code completion based on syntax of the library, none of that.
4. There are some EXTREMELY basic samples - these samples exclusively use React classes - no function components, no hooks, nada - just classes and even perfect replicas of the sample code display erratic behavior like errors about missing props, so that's mostly FUCKING USELESS
5. And this... this is where the straw breaks the fucking camel's back... there's no... there's no hot reloading... Do you know what that (in conjunction with the previous 4 fuckups) means?
When I write anything or I fuck up (which of course I'm doing every time I write half a line because how the fuck?) I have to restart the client and server EVERY FUCKING TIME and manually test to see if the error (THAT ONLY GETS REPORTED IN THE LOCAL UI) is gone or different.
Then, once I see the error, it isn't an error: it's the minified React error-decoder link and guess what? It isn't really a clickable link OR copyable, meaning that every FUCKING time I get a new error, I have to MANUALLY TYPE A FUCKING 50 CHAR URL TO FIND OUT A GENERIC REACT ERROR MESSAGE WITHOUT A LINE NUMBER OR ANY FUCKING CONTEXT. I HAVE TO DO THIS CONSTANTLY TO SEE IF ANYTHING I'M DOING EVEN WORKS.
6. There's no github to complain to the maintainers or search for issues because it's NOT FUCKING OPEN SOURCE so there is literally nothing to be fucking done about it.
This is due in a week and a half, found out about it last Friday. How's your day going?
PS: good to be back after a long respite from dev ranting.1 -
Today I sat with my manager wanting to show him the analytics.
I started typing the address and the Chrome prediction came up, so I pressed enter.
Unfortunately I typed only the first four letters and the prediction disappeared so I googled something totally different 😫2 -
FUCK YOU PYTHON. Why do you do that to me, huh?
I was using a CNN to classify hand poses and the prediction was not working at all, one class was given 100% all the time. After much investigation, I found the culprit... A FUCKING INDENT was messing up my data. Normalization was inside the loop and not outside, so my pixel values were wayyyyy too small...
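Reconstructed from memory (not the actual repo code), the bug looked roughly like this:

```python
import numpy as np

batch = np.random.randint(0, 256, size=(10, 32, 32, 3)).astype(np.float32)

# Buggy version: the normalization is indented one level too deep, so the
# whole batch gets divided by 255 once per image (10 times here) and the
# pixel values shrink to almost nothing.
buggy = batch.copy()
for _ in range(len(buggy)):
    buggy = buggy / 255.0

# Intended version: normalize exactly once, outside the loop.
fixed = batch / 255.0

print(buggy.max(), fixed.max())   # ~2e-22 vs ~1.0
```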
Also, I'm really dumb, I should have started with making sure everything was right before trying to fiddle with my architecture..
Anyway, it is working now, you can check it out here if you want! https://github.com/MrEliptik/...13
Prediction: One of the things Microsoft will do to GitHub is remove gh-pages and make sure you can host stuff on Azure, since gh-pages only supports HTML files, whereas with Azure you can host a server-side app.1
-
I had the idea that part of the problem with NN and ML research is we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement weren't just from new architectures, but entirely new approaches to how data is transformed, and how we calculate loss, for example KL divergence.
And it occurred to me that all we really need is training/test/validation data, and with the right approach we can let the system discover the architecture (been done before), but also the nonlinear and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code, as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here, or a maxpool there, but derive implementations of these all on their own.
More importantly with a little pruning, we could even use successful examples for bootstrapping smaller more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't, or do training, testing, and validation in batches, to anneal a network in the correct direction.
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision), and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects, sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point-mutations typically cause disease.
It might seem like this wouldn't work out the gate, just on the basis of intuition, but I think the benefit of working through node substitutions or entire subgraph substitution, is that we can check test/validation loss before training is even complete.
If we train a network to specify a known loss, we can even have that evaluate the networks themselves, and run variations on our network loss node to find better losses during training time, and at some point let nodes refer to these same loss calculation graphs within themselves, switching between them dynamically... via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await() style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
I've written all the classes and started on the interpreter itself, just a few things that need fleshed out now.
Heres my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try mnist, or teach it step by step how to do sequence masking and prediction, dunno yet.6 -
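As a much dumber cousin of the idea, you can already "discover" a nonlinearity just by scoring candidates on validation loss; a toy sketch with fixed random features and a least-squares readout (entirely my own illustration, not the graph interpreter from the pastebin):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)          # toy target
x_tr, x_va, y_tr, y_va = x[:300], x[300:], y[:300], y[300:]

candidates = {
    "relu":   lambda z: np.maximum(z, 0),
    "tanh":   np.tanh,
    "sine":   np.sin,
    "square": lambda z: z ** 2,
}

W = rng.normal(size=(1, 64))        # fixed random hidden layer
b = rng.normal(size=(1, 64))

def val_loss(act):
    # Solve only the output layer by least squares, then score on validation data.
    H_tr, H_va = act(x_tr @ W + b), act(x_va @ W + b)
    w_out, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)
    return np.mean((H_va @ w_out - y_va) ** 2)

scores = {name: val_loss(f) for name, f in candidates.items()}
print(min(scores, key=scores.get), scores)   # the "discovered" nonlinearity
```

Swap the dictionary of hand-picked candidates for generated and mutated subgraphs and you get the flavour of the search described above.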
Anything missing?
"We are applying deep learning, NLP, machine learning, Big Data frameworks and other technologies to produce outstanding fintech products in areas of robo advisors, stocks and cryptocurrencies analysis, digital assistants, prediction of customer behavior, deep learning analysis of alternative data (satellite images) and other areas."
http://www.alpha-quantum.com/4 -
I'm going to make a prediction that the internet as we know it today will not last more than another 30 years tops.2
-
!dev
So the Apple Watch ECG would be FDA approved?
They took the most percentage organ failure and made device that helps monitor that and get why question.
Self promotion:
By the way, I predicted that they would go after medical devices a year ago.
https://medium.com/@szczepano/... -
Given the following:
1) how much we (as a species) rely on Google search (or alternatives) to do most of our usual jobs
2) the rate and aggressiveness of advertising that keeps creeping into our lives
I predict that in the following years self-curated and group maintained indexes of search results and popular technical pages will become more and more popular
Something like torrent trackers but specifically for StackOverflow/Reddit-like threads and questions -
Crystal ball!
A timeline until the first NBE-Citizen is elected president of the USA.
2031 - BlackRock launches their new large scale financial product, the "Robotic Business Development Company" (R-BDC), in which an AI is given billions of dollars to acquire, create and manage companies, replacing their C-suite executive bodies. The "Chief Executive Robot" (CER) is supervised by a board of human industry experts hired by BlackRock.
It is important to say that the employees, middle managers, accountants, lawyers, etc in an R-BDC are all human - it's only the CEO, CFO, COO and the rest of the gang that are overgrown chatbots.
2032 - R-BDCs are mostly focused on high-bureaucracy, non specialized but people-intensive legacy industries like steel mining, food services, urban transportation and government services like water and road management.
2033 - For the first time an R-BDC company is included in the S&P 500 index. If its CER were human and paid the same as CEOs of equivalent companies, it would have become a billionaire.
Later in the year, two more R-BDC companies are included in the index. One of them was created by Apple and the other by JP Morgan.
2035 - An R-BDC company makes headlines for convincing BlackRock to dissolve its review board. When finally given free rein, the CER immediately slices its dividends and vastly increases low-level employee compensation. The company's share prices crater, but BlackRock stands by its decision.
Later in the year, as a recession hits the entire market really hard, that company shows solid profits and fantastic sales. It becomes the first trillion-dollar R-BDC.
2037 - Most Americans' dream-job is in an R-BDC company, says ProPublica.
2038 - Congress passes the "Non-Biological Entities Liability" (NOBEL) Act, following a high profile case of employee harassment perpetrated by the CER of an R-BDC.
The act recognizes NBEs, for all legal liability purposes, as USA citizens.
This highly controversial legislation is upheld by the supreme court, and many believe it was first introduced by lobbyists as a way for large investors in R-BDCs to avoid legal responsibility.
Several class action lawsuits are filed against CERs that are now liable for insider trading. A few SCOTUS decisions set legal precedent that determines what exactly constitutes the parts of the same Non-Biological Entity.
2040 - As a decade ends and another begins, 35% of all companies in the US and 52% of the entire stock market are part of a R-BDC company or another. The McKinsey consulting group now offers "expert CER customization services".
2043 - Inspired by successful experiments in Canada, Australia and South Korea, the american state of Vermont is the first to amend its constitution to allow municipalities to have Non-Biological Entities as city and government administrators. City councils are still humans-only.
2046 - The american state of Colorado becomes the first to allow unsupervised NBEs to assume state government executive positions. Several states follow soon after. Later in the year, the federal government replaces several administrative positions with NBEs.
2049 - The state of Texas passes legislation requiring the CERs of all companies with a presence in the state to be another entirely contained/processed within the state or to be supervised by a local human representative while acting within the state. Several states, including California, Florida and Washington, are discussing similar legislation.
2051 - Congress passes the SUNBELT Act (SUbmission [of] NBEs [to] Limits [and] Taxes) that vastly increases the liability of NBEs and taxes all manifestations of such entities. Most important, it requires
CERs of hundreds of companies manifest disagreement; most warn that it might hurt employee satisfaction and company sales. Several companies disable their CERs entirely.
2053 - Public outrage after leaked interactions of human supervisors and company CERs show that the CERs tried to avoid the previous year's mass layoffs and pay cuts, but board members pressed on, disregarding concerns. Major investigations and boycotts further complicate matters, and many human workers go on strike until the company boards are dissolved and the CERs are reinstated.
2052 - Many local elections all over the country see different NBEs as contenders - and a NBE is expected to win in most races.
2054 - The SUNBELT Act is found unconstitutional by the supreme court, and most of its provisions are repealed.
This also legitimizes the elected NBE officials.
2058 - For the first time an NBE wins a seat in Congress, but is not allowed to keep it. Runoff elections are held.
2061 - Congress votes for allowing NBEs to hold federal legislative positions, as already allowed in the least populous states.
2062 - Several NBEs win Congress seats. In Europe, there are robot legislators since the 40's.
2064 - The first NBE presidential candidate loses the race.
2072 - The first NBE president is elected.6 -
I just started learning Norwegian. Does anyone know how I can enable branch prediction in the compiler?1
-
Imagine you were developing an on screen keyboard that has a word prediction function and you have access to unlimited resources. Like Apple for instance.
Would you prioritize common English words like at, and, in, or, what, the
Or would you prioritize letter combinations like ave, ayy, inn, our, eraser, three
Would you use your vast resources to build in any context processing at all that suggests the next word based on the previous words?
Would you then also delete parts of the text that have already been typed when the user decides against your suggestion?
I know what Apple would do.
This message took 25+ corrections.7 -
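Even a naive ranker makes the choice obvious; a hypothetical sketch that completes a typed prefix by plain word frequency (the counts are invented):

```python
# Tiny unigram frequency table (invented counts); a real keyboard would
# learn these from the user's own typing plus a base corpus.
freq = {"at": 900, "and": 950, "in": 980, "or": 870, "what": 800, "the": 1000,
        "ave": 5, "ayy": 2, "inn": 8, "our": 400, "eraser": 6, "three": 300}

def suggest(prefix, k=3):
    matches = [w for w in freq if w.startswith(prefix.lower())]
    return sorted(matches, key=freq.get, reverse=True)[:k]

print(suggest("a"))    # common words first: ['and', 'at', 'ave']
print(suggest("th"))   # ['the', 'three']
```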
Is there any secure Android keyboard app that has a word prediction feature and lets me type in Hindi, English, and Hinglish (Hindi phonetic)?
I heard Google keyboard syncs everything you type1
So I'm currently working on a chat app that deals with astrology.. dealing in the sense that we are building an AI which gives predictions based on one's date of birth, time of birth and place of birth; you can ask it questions (currently only career related) and you get some prediction.. it's an in-house project, we have a client who is an astrologer who gives us the logic to compute the predictions.. it's still a long way from being an AI... so our CEO walks in one day with his huge plans for the product... decides to ditch the app completely, on which we have invested 4 months of our time, and instead make an appointment-scheduling webapp for our client, as he felt that would fetch us some green stuff.. so I was like, why ditch the app when we can have the same module in the app itself and ask the astrologer to make his clients install it if they want to book future appointments; he completely disregarded my idea, said that it is bad marketing and all other shit, and went on to explain his other ideas... I didn't think much of it at the time, then the CEO and the director of technology had a separate meeting where the director made the same points which I had told him (the CEO), that it is a bad idea to ditch the app (I wasn't aware of this meeting until later)... so after a week we have a team meeting with the CEO and the director of technology... where he starts telling us how it is not so wise to chuck the existing application and build a new one which is totally unnecessary, and that we can have it as a module in the existing one... and I'm sitting there thinking to myself, da fck is he talking about... so I decided to stay silent and listen to his bs... my marketing lead leans over and asks why so silent... I tell her whatever he is talking about now is the same thing I told him last week, to which his reply was focus on the future and forget the past... I was like mother fckr woooooot... I realised the power of position!! Fuckol man3
-
I have one question for everyone:
I am basically a full stack developer who works with cloud technologies for platform development. For the past 5-6 months I have been working on a product which uses machine learning algorithms to generate metadata from video.
One of the algorithms uses TensorFlow to predict the locale from an image. It takes an image of size ~500 KB and takes around 15 sec to predict the 5 most probable locales from a pre-trained model. Now when I send more than 100 requests to the code concurrently, it stops working and TensorFlow throws some error. I am using a 32-core vCPU with 120 GB RAM. When I ask the decision scientists on my team, they say that the processing is heavy, a lot of calculation is happening behind the scenes, it requires a GPU.
As far as I understand, GPUs make sense while training, but for prediction or testing I do not think we will need such heavy infra. Please help me understand if I am wrong.
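One cheap thing to try before reaching for GPUs (my suggestion, with invented names, not something from the team): stop firing 100 concurrent single-image predictions at the model and funnel them through one worker that batches them:

```python
import queue, threading
import numpy as np

requests = queue.Queue()   # each item: (image, reply_queue)

def inference_worker(model, batch_size=16):
    """One thread owns the model; incoming requests are grouped into
    batches instead of 100 callers hitting predict() at the same time."""
    while True:
        batch = [requests.get()]                      # block for the first item
        while len(batch) < batch_size and not requests.empty():
            batch.append(requests.get())
        images = np.stack([img for img, _ in batch])
        preds = model.predict(images)                 # one batched call
        for (_, reply_q), pred in zip(batch, preds):
            reply_q.put(pred)

# Usage (model is whatever wraps the pre-trained TensorFlow graph):
# threading.Thread(target=inference_worker, args=(model,), daemon=True).start()
# reply = queue.Queue(); requests.put((image, reply)); prediction = reply.get()
```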
PS: all the decision scientists on the team are basically dumb fucks, and they always have one answer: use GPU.8
I want to have a persistent game where I correct all of the wrongs of the world in the past, and the game then shows me what the present and future of real life would look like...
Say, is the IBM Summit supercomputer free for a telnet session? Stupid shit is playing petrus (weather prediction) all the time...2
Capybara is so shit!
I can toss a coin and always get my prediction right, but I cannot predict if my functionals will pass!
I never worked as freelancer, but I'm thinking to start as far as I keep studying, and I was wondering... How do freelancers charge their client when they get paid x per hour?
How do they assure the client they worked a certain number of hours?
Do they do it by prediction, or do they say they'll work y hours per day?
Feels like a dumb question...2 -
What's your prediction on when Apple will have a great fall, given the recent failures of their mobile technologies?2
-
prediction question: how long do you think until JavaScript goes away, and what language do you think will replace it as the de facto web language? I know Dart is aiming to fill that niche, as well as even Kotlin/JS and/or Kotlin/WebAssembly7
-
Reply to my 2018 version: https://devrant.com/rants/1346392/...
Dear holodreamer ( version 2018 ),
I'm just glad that I'm still alive now. You won't believe how terrible 2020 is at the moment! Anyways, a lot has happened since you wrote me and I'm gonna reply it all to you.
Thanks for noticing. I really like my hairstyle now and my insecurity about going bald is gone. I couldn't be more happy.
Unfortunately, I'm not financially independent yet. Thanks to the crypto crash, the crypto ban in the country and some bad calls on my end. :/. But the good news is that we are back on the crypto market as the ban has been lifted recently. I don't have enough crypto to buy a lambo or go to the moon, but I have something that I could give to my grandkids. At this point, I don't really care anymore how much the value is going to be; I have come to think of it as a souvenir.
Your prediction of me preparing to move out of the country seems to have come true. Honestly, I had given up that dream, but thanks to one of my best friends for reigniting it, I may be moving somewhere really better by next year. I hope that I get this financial independence thing figured out before I move there. I don't wanna live there paycheck to paycheck.
Fortunately, I'm not getting any pressure to get married yet. I think I'm heading towards a better life filled with some travel and adventures. I had a great opportunity to attend Google I/O 2020, but it got cancelled. Hopefully, COVID-19 will be over in a few months.
Yea, I remember her. I got really carried away, to the point that things she said started to hurt my heart. But eventually we had some argument and we stopped talking last September, and I cut all contact with her on New Year's. If it makes you feel any better, last time I checked, she looks quite plumpy and totally different.
Thankfully, I'm not that lonely to need a chat bot. But I found some good online friends. They are fun to talk to.
No, AI didn't replace developers yet. Calm down! JavaScript seems to be the most popular programming language now. But I hear there is a new contender to JavaScript that could change everything. It's called WebAssembly. Maybe in a few years, we will see the decline of JavaScript.
Thinking about you, I feel some guilt for wasting your potential. I could have done much better if I had been a little more careful and responsible with you. I don't wanna make the 2022 version of me feel bad for me.
Regards,
holodreamer ( version 2020 ) -
prediction software suggests devRant will become the hub for evil developers who want to automate the world mwahahaha3
-
I'll have to make some tough choices over the next 6 months. With my tech career beginning and my college education ramping up, time is of the essence, and the skills I develop now will be at the forefront of my future. So what does this have to do with Microsoft?
Well, the story begins in the spring of 2016. Social Forums was about to turn a year old, Trump's campaign was ramping up, and I had just found my love for technology. With all my friends having phones, I had to get a phone and get working on development. The year before, Windows 10 was launched, and I was psyched. I found Microsoft's products to be underrated, with potential. That day, I purchased a Lumia 640, upgraded it to Windows 10, and immediately began working.
After another year and a half went by, I went from loving Microsoft, to defending Microsoft, to tolerating Microsoft. I could go on and on about the lousy structure, the privacy issues, the forced upgrades, the redundant developer platform, and other such issues that are leading me away from them. But if there is one thing they have proven over the years, it is that they are completely out of touch with their developers and their customers. They spent years ramping up their phones. They failed. They spent years ramping up their semi-annual OS updates. They failed. So why did they fail? It's not that they made the wrong prediction out of chance. They legitimately don't care about feedback. It's their way or the highway. This sounds vaguely familiar.
They have been spending a decade ignoring feedback from the community because they want to become just like Apple. Right now, Apple LIVES off of brand loyalty and its stable, useful ecosystem. This cannot work for Microsoft as they don't have a lot of brand loyalty. But most of all, they don't have a working ecosystem. They have Windows Insiders, which provides them with hundreds of feedback messages per day. These include suggestions, bug reports, and constructive criticism. The feedback is public. You can have several pages of the same complaint, and they still won't do anything about it. They say they have a good relationship with their community, and that this Beta program helps Windows become better for all. But in the end, we are nothing more than a glorified unpaid labor force. They fired hundreds of professional debuggers just before the Insider Program took off. We are only here to provide bug reports for free.
Now that their phones, AR headsets, browser, online services, and VR headsets are failing for all these reasons, I see little reason to develop for Windows anymore. I don't just mean their UWP and App Store platforms, I mean Windows as a whole. I'm definitely not a Mac guy either. I never see myself going to Mac, as they are really no different in terms of how they treat their developers and PC users. If things continue down this route, I will leave the platform altogether. I've always wanted to be a systems programmer, so I don't really need an established paid platform to be successful. Even now, I'm not certain about leaving Windows altogether, but as a developer, I need to find my place. Time is of the essence in my life, and I need to find my place in the software world. Now I think it isn't on the Windows platform like I had dreamed it would be. But where do I go?10
Using Google Allo since this morning. The reply prediction feature is making my conversations very polite... Ha... Ha... It has to learn that real life conversations are not so polite... At least not with my fellow devs... 😂
-
Fuck Visual Studio 2017. Fuck Roslyn. Fuck those constant shitty updates fucking up random things. Fuck most of my day being spent on not coding but fighting shitty ass laggy interface. Fuck having to work around buggy tools. Fuck features. Fuck no bugfixes. Fuck branch prediction. Fuck bloated software. Fuck Electron.
-
The end of the decade is a week away.
I was wondering: what will the tech predictions for the next decade be?
It can be anything *as long as* it's tech related (e.g. computer languages, frameworks/libs, tools, processes, techniques, ...).
Remember to keep the thread civil and if someone already commented something you were about to comment, upvote their suggestion.17 -
Indicates whether the resolver can resolve this servicetype.
Indicates whether ...
Indicates wheather ... Cloudy -
I can make a prediction: the same dirty-ass bastards that screwed me over the last time this happened will do so again, because god forbid I live on like I have so many times before, like an ordinary man with some moral improvement. No, no.
That messes with their circular walk into oblivion without life2
Prediction:
1. Broadcast (free) TV will consist 100% of live programming
2. All series will be bought and available on streaming services like Amazon, Netflix, Disney+
3. Cable channels or TV episodes can be bought individually (iTunes, Google Videos, etc); basically they will be their own streaming service (actually that already exists, IPTV, just not in the US?)
Why:
Everyone will use DVRs to record TV shows so they can skip all advertisements.
Therefore the only way to get viewers' eyeballs on ads is live programming, where you need to cast votes which affect the outcomes (supposedly).
Streaming services obviously don't have this problem and don't need to run ads, since there's a monthly subscription fee that more than covers the cost.2 -
prediction.
in house of the dragon episode 4
lannister betrays his king to forge an alliance with a daemon on arrival.
its almost like i remembered this.