Search - "intuition"
-
As a developer, sometimes you hammer away on some useless solo side project for a few weeks. Maybe a small game, a web interface for your home-built storage server, or an app to turn your living room lights on and off.
I often see these posts and graphs here about motivation, about a desire to conceive perfection. You want to create a self-hosted Spotify clone "but better", or you set out to make the best todo app for iOS ever written.
These rants and memes often highlight how you start with this incredible drive, how your code is perfectly clean when you begin. Then it all oscillates between states of panic and surprise, sweat, tears and euphoria, and ends in a disillusioned stare at the tangled mess you created, left to gather dust forever in some private repository.
Writing a physics engine from scratch was harder than you expected. You needed a lot of ugly code to get your admin panel working in Safari. Some other shiny idea came along, and you decided to bite, even though you feel a burning guilt about the ever growing pile of unfinished failures.
All I want to say is:
No time was lost.
This is how senior developers are born. You strengthen your brain, the calluses on your mind provide you with perseverance to solve problems. Even if (no, *especially* if) you gave up on your project.
Sometimes, giving up is good: it's a sign of wisdom and flexibility to focus on the broader domain again.
One of the things I love about failures is how varied they tend to be, how they force you to start seeing overarching patterns.
You don't notice the things you take back from your failures, they slip back sticking to you, undetected.
You get intuitions for strengths and weaknesses in patterns. Whenever you're matching two sparse ordered indexed lists, there's this corner of your brain lighting up on how to do it efficiently. You realize it's not the ORMs which suck, it's the fundamental object-relational impedance mismatch existing in all languages which causes problems, and you feel your fingers tingling whenever you encounter its effects in the future, ready to dive in ever so slightly deeper.
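Something like this two-pointer walk, for instance (a minimal Python sketch, assuming two plain sorted lists):

def match_sorted(a, b):
    # Walk both sorted lists in lockstep: O(len(a) + len(b)),
    # no nested loops, no repeated scanning.
    i, j, matches = 0, 0, []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            matches.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return matches

print(match_sorted([1, 4, 9, 40], [4, 7, 40, 41]))  # [4, 40]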
You notice you can suddenly solve completely abstract data problems using the pathfinding logic from your failed game. You realize you can use vector calculations from your physics engine to compare similarities in psychological behavior. You never understood trigonometry in high school, but while building a deficient robotic Arduino abomination it suddenly started making sense.
You're building intuitions, continuously. These intuitions are grooves which become deeper each time you encounter fundamental patterns. The more variation in environments and topics you expose yourself to, the more permanent these associations become.
Failure is inconsequential, failure even deserves respect, failure builds intuition about patterns. Every single epiphany about similarity in patterns is an incredible victory.
Please, for the love of code...
Start and fail as many projects as you can. -
I've had my share of incompetent coworkers. In order of appearance:
1. A full stack dev. This one guy never, and I mean NEVER, uses relationships in his tables. No indexing, no keys, nada. A couple of months later he was baffled why his page took ten seconds to load.
2. The same dev as (1). Requirement was to create some sort of "theme" feature for a web app. Hacked it by putting !important all over the place.
3. The same dev again. He creates several functions that return a view if the data exists and, if it doesn't, "echo '0'". No, not return 0 or return false or anything, but fucking echo. This was PHP. I posted a rant about this a few months ago.
4. Same dev, has no idea what clean code is. No, not just reusable functions, he doesn't even get indenting right. Some functions have 4 spaces, some 2 tabs, some 6 tabs! And this is inside the same function. God wait until he tries Python...
5. Same dev now suggests that he become the PM. GM approves (very small company). Assigns me to travel to a client since they needed "technical assistance about the API". Was actually there to lead a UAT session.
Intermezzo, that guy went from fullstack dev to PM to sales (yes, one who calls clients to offer products) to business development, to product analyst in the span of two years.
After a year and a half there, I quit.
6. New company, a "QA engineer" who also assumes the role of product owner. Does absolutely no testing other than "functional tests", for which he NEVER produces any form of documentation. Not even a set of test cases. He goes by "intuition".
7. Same guy as (6), hands me requirements for a feature. By "hands me" I mean he did that verbally. No spec documents, no slack chat, no Trello card. I ended up writing it as a card in Trello. Fast forward to the due date, he flips out because that wasn't what he wanted. Showed him the card. He walked away, without thinking of a solution how this mess should be handled.
Despite all this, I really don't want him (6&7) to leave the company. The devs get really stressed out at this job and he does make a really good person to laugh with/at. -
*Doesn't have Internet and bored as hell*
*Starts to program something random with Python*
*Wants to write something to a file, doesn't know how*
*Intuition starts...*
"foo = open('test.txt', 'w')
foo.write('hello\n')
foo.close()"
*Runs program*
*It actually fucking worked*
Tell me something simpler than Python.
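(And the intuition scales: the idiomatic version is barely longer, with the close handled for you by a context manager:)

with open('test.txt', 'w') as foo:
    foo.write('hello\n')  # file is closed automatically on exit

-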
Devs online be like "I started learning to code when I was 2 years old and submitted my first application at 5, since then I've made a few simple apps and pull in 2 million a day, not much but it pays the bills"
So discouraging to come up with a novel idea for a simple product and spend a lot of time on it, just to realize you're absolutely lost and severely lack the knowledge to produce a working product of any sort. All the while some kid makes something "simple" 10x more complex than what you failed to do, and in like a day, no less.
How do people just pick up so much knowledge so quickly? How do they just figure out information they couldn't have possibly known like it's intuition?
Life is hard, man. -
Psychic readings https://linkedin.com/pulse/... are one of the most mysterious and fascinating areas of the paranormal. This phenomenon has long attracted the attention of both ordinary people and scientists, since it represents the ability to receive information in unusual ways, bypassing the usual five senses.
Psychics, or people with such abilities, claim that they can sense energetic interactions, see objects and events at a distance, read thoughts, obtain information about a person only from his photograph, and so on. One of the most well-known psychic readings is tarot card reading, which allows psychics to predict the future and give advice on decision-making.
There are many theories about how psychic readings work. Some believe that psychics are able to perceive information not only through the usual five senses, but also through the sixth sense - intuition. Others believe that psychic abilities are related to a person's energy fields and aura.
In order to understand this phenomenon, scientists conduct numerous studies and experiments. However, it has not yet been possible to find a scientific explanation for extrasensory abilities. Some experiments show that psychics can detect information that ordinary people cannot see, but this has not yet been scientifically proven.
Many people turn to psychics in search of answers to questions regarding their personal life, career, health and other important aspects. Psychics offer them consultations and help them understand difficult situations, predict the future and help them make important decisions.
However, it is worth remembering that there are many impostors and scammers who try to use the popularity of psychic abilities to deceive. Therefore, it is important to choose trusted specialists and not get hung up on the predictions and advice of psychics, but make decisions independently, based on your own judgment and intuition.
Overall, psychic readings remain a mystery to science and society. Many people are confident in the reality of such abilities, others consider them fiction and deception. However, whether you believe in psychic abilities or not, it is worth recognizing that these paranormal phenomena continue to attract the attention and interest of many people around the world. -
Sometimes I feel like a Jedi:
My boss says: Look the app is not sending any notifications.
I just have a feeling that he deactivated notifications in the app settings.
I check that and it was deactivated
Does that count as Jedi skills? Or just programmer intuition? -
How I got selected for GSoC'19:
I will describe my journey in detail, i.e., from the 1st year of college. I joined my college back in 2017 (July). I was not even aware of what Computer Science was or what the different languages of CS were, but I had a strong intuition about doing my BTech in CSE only.
So yeah, I was totally unaware of the computer science stuff, but I had a strong desire to learn it, and I literally don't know why I had this desire. After getting into college, I started learning HTML, Python, and C, and I am really thankful to my friends who helped me learn, build logic and make stuff out of it. During the 1st month of joining the college, I got to know what Open Source, GSoC and GitHub were, thanks to my helpful seniors. But I was not into Open Source during my 1st year of college as I thought it was very difficult to start. In my 1st year, I used to do competitive programming and write scripts in Python to automate various stuff. I never thought that I would even start doing Open Source development. In the summer vacation after the 1st year I practiced programming on HackerRank and took an awesome course called Automate the Boring Stuff with Python (which I think is one of the most popular courses for Python), which really helped me to build my Python skills.
Now the 2nd year came, and I was totally confused between doing Open Source development and continuing with my competitive programming. But I wanted to know about Open Source development, so I thought starting now would be a good idea. I started attending meetups of OSDC (Open Source Developers Club), which is a hub of my college and which really helped me learn more about Open Source development from my seniors. I started looking for beginner-friendly projects in Python on the website Up For Grabs; it's really helpful for beginners. I contributed to a few of them, and in the beginning it was really tough for me, but yeah, I continued, which really helped me to at least dive into Open Source. Then I thought contributing to a bigger project, one with millions of lines of code, would be really interesting. As I was into web development in those days, I looked for a project matching my domain. So yeah, I finally landed on Oppia:
Oppia
I started contributing to Oppia in November. In the beginning it was really difficult for me to solve any issue (as I wasn't aware of the codebase, which was really big), but the mentors at Oppia are really helpful; they guided me, which really helped me to start my journey with Oppia. By the start of January I was able to resolve around 3–4 issues, which helped me become a collaborator at Oppia. Afterwards I really liked contributing to it, and I was able to resolve around 9–10 issues by the end of February, which led to me becoming a Team Member at Oppia. That was a real confidence boost and an indication that I was headed in the right direction.
Also in February, the GSoC organizations list came out, and yeah, Oppia was participating in it. The project ideas of Oppia were really interesting; I was even confused about which one to pick because there were 4–5 ideas which seemed interesting to me. After 1–2 days of thinking it over, I decided to go for one of them, i.e. "Asking students why they picked a particular answer", a full stack project.
I started working on my proposal in the first week of March. I used to get my proposal reviewed frequently by the mentors, which really helped me to build a good and strong proposal.
I must say a well-defined proposal is the most important key to getting selected in GSoC; you should also have made some contributions to the organization earlier, which I think really maximizes your chances of selection.
So after my proposal was made, I submitted it on the GSoC website.
Result Day:
It was the result day. By the way, I had the confidence of being selected, but yeah, I was a little bit nervous. All my friends were asking when my result was coming; I told them it would come at 12.30 AM (IST). Finally, the time came when I refreshed the GSoC website. Voila, the results were out. I opened the Oppia organization page, and yeah, my name was there. That day I was really happy and satisfied; I felt like I had achieved something in my life. It was a moment of pleasure for me. I called my parents and told them my result, and they were really happy for me.
I say cracking GSoC is worth it: the preparation you do, the contributions you make, the writing of the proposal; it is all really worth it.
I got so many messages from my juniors, friends, and seniors congratulating me. After I posted my result on Facebook and LinkedIn, there were tons of comments and likes on the post. So yeah, that's my journey.
By the way, I am writing this post really late, sorry for that. I should have done it earlier, but I was busy with milestone 1 of GSoC. -
Not at all.
I’m a dropout. 🤷♂️
My dropping out was due to mental health issues from a bad relationship, and also the realisation that I was failing the math-based portions of the course.
I’ve no doubt had I been better with maths and finished, the course would have been useful, but not the degree itself.
Not having it has never been a real barrier to my finding work, though it did raise eyebrows and require explanation to begin with... now my CV kinda speaks for itself in a way a degree simply doesn’t.
Throw in the fact that most grads can’t code (https://blog.codinghorror.com/why-c...) and employers are starting to wake up to the pointlessness of the degrees.
Real world learning, experience and intuition are *far* more valuable.
I will counterbalance this with the caveat that, if you're doing things on the very bleeding edge, then a compsci degree beyond undergrad is likely the course you want to forge. I assume there's no decent substitute for access to the knowledge of experts and the tech / equipment they bring to bear... just avoid becoming an ivory tower type and you'll be fine. -
Am I the only one who programs from experience and intuition, and nearly can't read a sentence without forgetting what it was about?
-
A friend approached me with an "unpopular opinion" regarding the worldwide famous intro to Machine Learning course by Andrew Ng.
His opinion: "shit is boring AF and so is the teacher"
Honestly, I loved it. I think it is a really good intro to the actual intuition (pun/reference intended) of the area. I especially like how it cuts down the herd in terms of the people that stick with it and the people that don't, as in "math is too hard. All I want is to create A.I" <---- bye Felicia.
Even then, I think the idea that Andrew Ng is boring is not too far from reality. I love math. I am by no means a natural, but with pen and paper in front of me and Google I feel like I can figure out and remember anything; I do it out of sheer obsession and a knack for mathematical challenges. That is what kept me sane through the course. Other than that, I find it hard to disagree, even if it was not boring for me.
Does anyone here think the course was fucking boring as well? As in, those of you who have taken it. -
When you're mentally debugging a module and you have an intuition about the point of failure.
So you start mentally tracking variables, going down functions calls, moving from one class to another until finally you reach that one line of code that you feel is getting the wrong parameters.
You substitute in the 10 different variables you have been mentally tracking and find out that...
THE LINE OF CODE IS GETTING THE CORRECT PARAMETERS, AND IS FUNCTIONING AS IT SHOULD.
fuck. -
(Yet another rant on TAR commands.)
Whose idea was it to make TAR file listing "tar -t" and not "tar -l"?
How does it make sense? It goes against intuition.
It would have been more logical to make "-t" tarfile instead of "-f", and to make "-l" list.
Obligatory: https://xkcd.com/1168/ -
I just overslept...
I'm not a dev yet, I have a contract job at the factory.
I have already worked 1 month of the 3 that I signed up for.
The worst thing is that I said I needed a day off to hand in specific papers for my university. It was supposed to be today, but I moved it to tomorrow due to a problem with transport.
Well, my superior is probably really angry right now... On the bright side, I will have 2 days off...
I won't get fired (hopefully) because, as it's a contract job, they should only subtract the daily pay from my monthly salary.
This is the first time I'm late for a real job. My intuition says that I should go, but I couldn't bear the shame... If I were to go, I would be at least 2 hours late. I have no idea what to do... I will probably stay home and lose the daily payment because I'm not strong enough to bear the shame today. It would be very difficult to get into the company as well. Ahhhhh! It's difficult to make decisions when you are shy, lazy and scared. -
My boss (Peter) canceled the meeting for today.
Talking to my coworker:
Me: I had a feeling there would be no meeting.
Coworker: Yeah? What made you think that?
Me: When Peter came to me and said, "There is no meeting today." I had a feeling there would be no meeting.
Coworker: That is some pretty strong intuition you have there. <laughing>
Me: I may have been jumping to conclusions though.
Coworker: <laughing harder> -
Sooooo ok ok. Started my graduate program in August, and thus far I have had to handle it while working as a manager, covering 2 vacant staff positions at work, as well as dealing with other personal items in my life. It has been exhausting beyond belief and I would not really recommend it for people working full-time, always-on-call jobs with a family, like at a..
But one thing that keeps my hopes up is the amount of great knowledge that the professors pass to us through their lectures. Sometimes I would get upset at how highly theoretical the material is; I was expecting to see tons of code in one of the major languages used in A.I (my graduate program has a focus in AI, that is my concentration) and was really disappointed at not seeing more code. But getting the high-level overview of the concepts has been really helpful in forcing me to do extra research in order to reconnect with some of the items that I had never thought of before.
If you follow, for example, different articles or online tutorials representing doing something simple like generating a simple neural network, it sometimes escapes our mind how some of the internal concepts of the activity in question are generated, how and why and the mathematical notions that led researchers reach the conclusions they did. As developers, we are sometimes used to just not caring about how sometimes a thing would work, just as long as it works "we will get back to this later" is a common thing in most tutorials, such as when I started with Java "don't worry about what public static main means, just write it up for now, oh and don't worry about what System.out.println() is, just know that its used to output something into bla bla bla" <---- shit like that is too common and it does not escape ML tutorials.
Its hard man, to focus on understanding the inner details of such a massive field all the time, but truly worth it. And if you do find yourself considering the need for higher education or not, well its more of a personal choice really. There are some very talented people that learn a lot on their own, but having the proper guidance of a body of highly trained industry professionals is always nice, my professors take the time to deal with the students on such a personal level that concepts get acquired faster, everyone in class is an engineer with years of experience, thus having people talk to us at that level is much appreciated and accelerates the process of being educated.
Basically what I am trying to say is that being exposed to different methodologies and theoretical concepts helps a lot in building intuition, especially when you literally have no other option but to git gud. And school is what you make of it, but certainly never a waste. -
Trying to re-type a massive essay I lost because the app refreshed for some reason. I'll try to keep it short (spoiler: I lied).
Recently, I had a conversation with a couple of non-tech people about AI and the fear of computers making humans obsolete. I have some strong (borderline ranty) opinions about this, and thought I'd post here to see what reaction is get.
This is not a "machines will destroy us" post, it's more about the very legitimate fear of losing jobs.
- AI is a tool. Its main use would be to help optimise the more complex routine tasks and free up people's time to be more creative in their jobs. Basically, it's the next step of automation.
- Human intuition can never be replaced. Sometimes, things just seem a bit off. Sure, an AI would avoid ever getting in that situation, but only if it had learnt it in the past. A human will always have to be at the helm of any such system.
- Achieving true intelligence and sentience is like trying to travel at the speed of light. The closer you get, the more challenges you face.
- Getting hyped by sensationalist news that claims the end is nigh because two computers optimised the language they used to communicate when trying to reach a goal is stupid. All this shows is that the tech is working as expected and the systems can optimise on the fly. To me, this was a pretty awesome moment.
Now, I'm not saying dystopia is impossible, neither am I saying that it is inevitable. Just like any tool presented to us, if we use it responsibly, we can make life and society a lot better. -
Reading through one of my posts I’ve realized how much ego programmers can actually have. Guys, some of you have already mastered or grasped more than just the foundations of the industry standard languages, as well as developed a very solid intuition behind some design patterns and a solid understanding of some frameworks and libraries, say NumPy, say React... we get it.
You don’t have to be such condescending assholes and be offended by some of the jokes we, programming beginners, make to release stress or just to have fun.
You already have some amazing developer and engineering skills. Do not ruin it with such a detrimental attitude; I make this post because I myself have made this mistake, and I still do to this day. But if what I’ve felt reading your comments is what non-programming people feel when around me, I wouldn’t be surprised if I found that some people hated me or just wanted to kill me.
I don’t know if this will get downvot’d or if more people think like this. But I needed to share this, even just as a reflection of my very own attitude.
Thank you for your time,
D. -
I can’t control my thoughts.
When someone says “wrap your head around” something, I imagine it. It happens every time.
It’s always 50/50. The one times the head of the person inside my head turns into a play-doh kind of sausage that wraps around a random object, usually a cube, and his face looks confused. It’s hard to separate his head from his neck and it terrifies me.
The other times the head appears extremely solid and has an overall round shape, then I subconsciously try to forcefully wrap it around that object but it doesn’t work and that person screams. It terrifies me even more.
Thoughts like this haunt me through my life. I hate them, but I also somehow feel like I'll miss them if they're gone, and at the same time I can't decide whether it's a kind of Stockholm syndrome towards those terrifying thoughts, which are somehow both so personal yet so alien, or whether my intuition is lying to me again. Both of those possible reasons scare me even more.
My intuition is very valuable to me, I value it the same as I value the freedom of thought – above everything else. Those situations compromise both. Intuition is a major decision-making instrument to me, so terrible things will happen if I couldn’t trust it.
I don’t know what exactly I did wrong to become like this and I can’t remember when it all started7 -
So while exploring some new ideas, I decided to figure out if I could use variables in the known set to determine the bounds of variables in the unknown set.
The variables in question are algebraic identities derived from the semiprimes, so you already know where this is going.
The existing known set is 1194 identities.
And there are, if I recall, roughly two dozen unknowns.
Many knowns have the unknowns as their factors. The d4 product set for example is composed of variables d4a, d4u, d4z, d4z9, d4z4, d4alpha, d4theta, d4omega, etc.
The component variables themselves are unknown, just their products are known. Anyway.
What I've found interesting is that if you know the minimum of some of these subsets, for example that d4z is smallest out of the d4's for some semiprimes, then you know the upper bound of both the component variables d4 and z.
Unless of course either of them is < 1.
So the order of these variables, based on value, changes depending on the properties of the semiprime, which I won't get into. Most of the time the order change is minor, but for some variables they can vary a lot between semiprimes, rapidly shifting their rank in the known set. This makes it hard to do anything with them.
And what I found myself asking, over and over again, was whether there was a way to lock them down. Think of it like a giant switchboard, where flipping one switch lights up N number of others, apparently at random. But flipping some other switch completely alters how that first switch works and what lights it seemingly interacts with. And you have a board of them that's 1194^2 in total. So what do you do?
I'd had a similar notion a while back, where I would measure relative value among a bunch of variables in the known set, assign a letter if the conditions were present, and generate a string, called a "haplotype."
It was haphazard, and I wrote a lot of code to do filtering, sorting, and set manipulation to find sets of elements in common, unique elements, etc. But the 'type' strings, a jumble of random letters, were only useful say, forty percent of the time. For example, if a semiprime had a particular type starting with a certain series of letters, 40% of the time a certain known variable was guaranteed to be above a certain variable from the unknown set... ~40% of the time.
It was a lost cause it seemed.
But I returned to the idea recently and revamped the entire notion.
Instead what I would approach it from a more complete angle.
I'd take two known variables J and K, one would be called the indicator, and the other would be the 'target'.
Two other variables would be the 'component' variable (an element taken from the unknown set) and the constraint variable (which could be from either the known or unknown set).
The idea was that relationships between the KNOWN variables (an indicator and a target variable) could be used to indicate the rank relationship between the unknown component variable and the constraint variable.
You'd think this wouldn't work either, but my intuition was that there were so many seemingly 'random' rank changes of variables in the known set for any two semiprimes that 1. no two semiprimes ever shared the same order for every variable, and 2. the order of the known variables had to be leaking information about the relationships of the unknown variables.
It turns out my intuition was correct.
Imagine you are picking a lock, and by knowing the order and position of the first two pins, you are able to deduce the relative position of two pins further back that you can't reach because of the locks security features. It doesn't let you unlock the lock directly, but by knowing this, if you can get past the lock's security features, you have a chance of using information about the third pin to get a better, if incomplete, understanding about the boundary position of the last pin.
I would initiate a big scoring list, one for each known element or identity. And then I would check it in tandem like so:
if component > constraint and indicator > target:
    indicator[j] += 1
This is a simplification, but the idea was to score ALL such combinations of relationships: whether the indicator was greater than the target at the same time as a component was greater than a constraint, or the opposite.
This worked out to four if checks and four separate score lists.
And by subtracting one scorelist from another, I could check for variables that were a bad fit: they'd have, for example, equal probability of scoring greater than the target for one semiprime and lesser than it for another.
So for any given relationship, greater or lesser, between any unknown variable and constraint variable, I could find an indicator variable and target variable whose relationship strongly correlated to the unknown's.
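A minimal sketch of that four-way tally (the variable names here are hypothetical placeholders, not the actual code):

def score_pair(samples, indicator, target, component, constraint):
    # One counter per joint outcome of the two comparisons.
    scores = {'gg': 0, 'gl': 0, 'lg': 0, 'll': 0}
    for s in samples:  # s maps variable names to values for one semiprime
        key = ('g' if s[component] > s[constraint] else 'l') \
            + ('g' if s[indicator] > s[target] else 'l')
        scores[key] += 1
    return scores

# A bad-fit pair spreads its counts evenly; a strong indicator/target
# pair concentrates nearly all counts in one cell.
print(score_pair([{'d4z': 3, 'z': 1, 'j': 5, 'k': 2},
                  {'d4z': 2, 'z': 4, 'j': 1, 'k': 6}],
                 'j', 'k', 'd4z', 'z'))  # {'gg': 1, 'gl': 0, 'lg': 0, 'll': 1}

-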
Ok, so I need some clarity from you good folk, please.
My lead developer is also my main mentor, as I am still very much a junior. He carved out most of his career in PHP, but due to his curious/hands-on personality, he has become proficient with Golang, Docker, Javascript, HTML/CSS.
We have had a number of chats about what I am best focusing on, both personally and related to work, and he makes quite a compelling case for the "learn as many things as possible; this is what makes you truly valuable" school of thought. Trouble is, this is in direct contrast to what I was taught by my previously esteemed mentor, Gordon Zhu from watchandcode.com. "Watch and Code is about the core skills that all great developers possess. These skills are incredibly important but sound boring and forgettable. They’re things like reading code, consistency and style, debugging, refactoring, and test-driven development. If I could distill Watch and Code to one skill, it would be the ability to take any codebase and rip it apart. And the most important component of that ability is being able to read code."
As you can see, Gordon always emphasised language neutrality, mastering the fundamentals, and going deep rather than wide. He has a ruthlessly high barrier of entry for learning new skills, which is basically "learn something when you have no other option but to learn it".
His approach served me well for my deep dive into Javascript, my first language. It is still the one I know the best and enjoy using the most, despite having written programs in PHP, Ruby, Golang and C# since then. I have picked up quite a lot about different build pipelines, development environments and general web development as a result of exposure to these other things, so it isn't a waste of time.
But I am starting to go a bit mad. I focus almost exclusively on quite data intensive UI development with Vue.js in my day job, although there is an expectation I will help with porting an app to .NET Core 3 in a few months. .NET is rather huge from what I have seen so far, and I am seriously craving a sense of focus. My intuition says I am happiest on the front end, and that focusing on becoming a skilled Javascript engineer is where I will get the biggest returns in mastery, pay and also LIFE BALANCE/WELLBEING...
Any thoughts, people? I would be interested to hear people's experiences regarding depth vs breadth when it comes to the real world. -
Every layout goal must take hours of frustrating intuition-destroying trial and error, followed by documentation cross-examination, MRE building, upstream bug-filing, and workaround pursuits.
https://jsfiddle.net/uz5dr8h4/21/
But no, CSS doesn't suck, you're just bad at it. -
I tried to sort out a basic multi-layer neural network last night... by hand, just to prove that I am able to do the math by myself and that I have the intuition under control, rather than just relying on Tensorflow or Pytorch to do shit for me.
I stayed up till 3 in the morning and woke up having nothing but dreams about the endeavor. The shitty part is that I couldn't stop dreaming about partial derivatives and how shit it was that I sucked at them in HS and uni. I get them now, but fuck, I just feel that I could have done so much better at uni instead of passing my math classes with 80% to 90% of the grade. I feel as if I was slacking, all thanks to being damn near mathematically dyslexic. -
Hmm, I wish I had some intuition when it comes to software architecture, I guess. Being able to pick the right patterns and understand what I'm doing.
-
I‘m curious what you guys think about puzzle games with timers.
I personally hate being pressured by the timer. And I hate it when the puzzle resets and I need to start from scratch because I ran out of time.
I prefer to take my time and think about the next move rather than rapid fire my moves by intuition and hoping to get lucky.
Yet so many puzzle games have timers. Is this just lazy design? Do you like timers? What do you think about this? -
After learning a bit about alife I was able to write another one. It took some false starts to understand the problem, but afterward I was able to refactor the problem into a sort of alife that measured and carefully tweaked various variables in the simulator as the algorithm explored the parameter space. After a few hours of letting the thing run, it successfully returned a remainder of zero on 41.4% of semiprimes tested.
This is the bad boy right here:
tracks[14]
[15, 2731, 52, 144, 41.4]
As they say, "he ain't there yet, but he got the spirit."
A 'track' here is just a collection of critical values and a fitness score that was found given a few million runs. These variables are used as input to a factoring algorithm, attempting to factor any number you give it. These parameters tune or configure the algorithm to try slightly different things. After some trial runs, the results are stored in the last entry in the list, and the whole process is repeated with slightly different numbers, ones that have been modified and mutated so we can explore the space of possible parameters.
Naturally this is a bit of a hodgepodge, but the critical thing is that for each configuration of numbers representing a track (and its results), I chose the lowest fitness of three runs. Meaning hypothetically there's room for improvement with a tweak of the core algorithm, or even modifications or mutations to the track variables. I have no clue if this scales up to very large semiprime products, so that would be one of the next steps to test.
Fitness also doesn't account for return speed. Some of these may have a lower overall fitness, but might in fact have a lower basis (the value of 'i' that needs to be found in order for the algorithm to return rem%a == 0) for correctly factoring a semiprime.
The key thing here is that because all the entries generated here depend on an outer loop that specifies [i] must never be greater than a/4 (for whatever the lowest factor generated in this run is), we can potentially push down the value of i further with some modification.
The entire exercise took 2.1735 billion iterations (3-4 hours, wasn't paying attention) to find this particular configuration of variables for the current algorithm, but as before, I suspect I can probably push the fitness value (percentage of semiprimes covered) higher, either with a few additional parameters, or a modification of the algorithm itself (with a necessary rerun to find another track of equivalent or greater fitness).
I'm starting to bump up against the limit of my resources; I keep hitting the ceiling in my RAD-style write->test->repeat development loop.
I'm primarily using the limited number of identities I know, my gut intuition, combined with looking at the numbers themselves, to deduce relationships as I improve these and other algorithms, instead of relying strictly on memorizing identities like most mathematicians do.
I'm thinking if I want to keep that rapid write->eval loop I'm gonna have to upgrade, or go to a server environment to keep things snappy.
I did find that "jiggling" the parameters after each trial helped to explore the parameter space better, so I wrote some methods to do just that. But what I wouldn't mind doing is taking this a bit of a step further, and writing some code to optimize the variables of the jiggle method itself, by automating the observation of real-time track fitness, and discarding those changes that lead to the system tending to find tracks with lower fitness.
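The jiggle step itself is simple enough to sketch (hypothetical, not the actual methods):

import random

def jiggle(track, scale=0.05):
    # Nudge each parameter by up to +/-5% to explore nearby parameter space.
    return [p + p * random.uniform(-scale, scale) for p in track]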
I'd also like to break up the entire regime into a training vs test set, but for now the results are pretty promising.
I knew if I kept researching I'd likely find extensions like this. Of course tested on billions of semiprimes instead of simply millions, or tested on very large semiprimes, the effect might disappear, though the more I've tested, and the larger the numbers I've given it, the more prevalent the effect has become.
Hitko suggested in the earlier thread, based on a simplification, that the original algorithm was a tautology, but something told me that, for a change, I had gotten one correct. Without that initial challenge I might have chalked this up to another false start instead of pushing through and making further breakthroughs.
I'd also like to thank all those who followed along, helped, or cheered on the madness:
In no particular order: demolishun, scor, root, iiii, karlisk, netikras, fast-nop, hazarth, chonky-quiche, Midnight-shcode, nanobot, c0d4, jilano, kescherrant, electrineer, nomad, vintprox, sariel, lensflare, jeeper.
The original write up for the ideas behind the concept can be found at:
https://devrant.com/rants/7650612/...
If I left your name out, you better speak up, there's only so many invitations to the orgy.
Firecode already says we're past max capacity! -
I had the idea that part of the problem in NN and ML research is that we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement came not just from new architectures, but from entirely new approaches to how data is transformed and how we calculate loss, for example KL divergence.
And it occurred to me that all we really need is training/test/validation data, and with the right approach we can let the system discover not only the architecture (been done before), but also the nonlinear and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code, as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here, or a maxpool there, but derive implementations of these all on their own.
More importantly with a little pruning, we could even use successful examples for bootstrapping smaller more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't, or do training, testing, and validation in batches, to anneal a network in the correct direction.
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision), and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects, sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point-mutations typically cause disease.
It might seem like this wouldn't work out the gate, just on the basis of intuition, but I think the benefit of working through node substitutions or entire subgraph substitution, is that we can check test/validation loss before training is even complete.
If we train a network to specify a known loss, we can even have that evaluate the networks themselves, and run variations on our network loss node to find better losses during training time, and at some point let nodes refer to these same loss calculation graphs within themselves, switching between them dynamically... via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await()-style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
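A toy version of that substitute-and-select loop, stripped to the bone (every name here is a hypothetical stand-in):

import random

OPS = ['relu', 'tanh', 'sigmoid', 'maxpool', 'conv']  # toy node vocabulary

def mutate(graph):
    # Swap one node for a random variant: the subgraph substitution step.
    g = list(graph)
    g[random.randrange(len(g))] = random.choice(OPS)
    return g

def val_loss(graph):
    # Placeholder fitness; a real run would train briefly on a batch
    # and measure validation loss here.
    return sum(OPS.index(op) for op in graph) + random.random()

def evolve(graph, generations=100):
    best, best_loss = graph, val_loss(graph)
    for _ in range(generations):
        cand = mutate(best)      # propose a substitution
        loss = val_loss(cand)    # evaluate before training completes
        if loss < best_loss:     # keep only fitness-improving swaps
            best, best_loss = cand, loss
    return best

print(evolve(['conv', 'maxpool', 'relu']))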
I've written all the classes and started on the interpreter itself; just a few things need fleshing out now.
Here's my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try MNIST, or teach it step by step how to do sequence masking and prediction. Dunno yet. -
"Good design begins with honesty, asks tough questions, comes from collaboration and from trusting your intuition." - Freeman Thomas
-
!rant
Going through my graduate program, I have come to realize that there is more to A.I than just machine learning algorithms. As if ML was not complicated enough, we add more to it, such as KRR and other topics that border on the areas of Cognitive Science, Boolean Algebra, Logic and even Philosophy, and you know what? I dig it. I dig it because some of the information I am getting in this course is damn near impossible to find anywhere else. Such is the case with a method for fucking signature unit propagation, which afuckingparently was developed by one of my instructors (not complaining, just really fucking impressed)
The thing is, most of these items have a parallel in the software development that we do on a day-to-day basis, all of us, no matter if you do web, systems development, database development, whatever; the general concepts are the same: you represent real-world concepts, such as those of logic and knowledge, in programmatic/mathematical representations.
I am really amazed at the content of these items, I really am. I just wish for some clarification of the ambiguity; it seems like most things would be better explained from a programmer's point of view. Most of the items that I have seen could easily have been summarized in a programmer's logic, if only the authors had taken the time to do it. I get that there needs to be mathematical intuition formulated before anything, and that it is sometimes better to learn concepts from an outside, mathematical point of view, but shit is just strange sometimes. -
How do you deal with relatively complex Boolean logic requirements?
Here's a simple example, of which I missed 50% of the cases because it was non-intuitive to me:
A year is a leap year if:
- it is divisible by 4
- except it is also divisible by 100
- unless it is also divisible by 400
To my intuition, the logic tree is as follows:
if (year % 4 == 0) -> true
if (year % 100 == 0) -> false
if (year % 400 == 0) -> true
so I ended up with 3 cases and I initially missed all the others until I started coding.
The full solution is:
function isLeapYear(year) {
  if (year % 4 === 0) {
    if (year % 100 === 0) {
      if (year % 400 === 0) {
        return true;
      } else {
        return false;
      }
    } else {
      return true;
    }
  } else {
    return false;
  }
}
I don't like it when I don't immediately see all logic paths.
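For what it's worth, the whole tree also collapses into a single expression where every path is visible at once (a Python sketch):

def is_leap_year(year):
    # Divisible by 4, except centuries, unless divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1900, 2000, 2012, 2023) if is_leap_year(y)])  # [2000, 2012]

-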
"I make all my decisions on intuition. But then, I must know why I made that decision. I throw a spear into the darkness. That is intuition. Then I must send an army into the darkness to find the spear. That is intellect." - Ingmar Bergman4
-
The hype around Artificial Intelligence and neural nets makes me sicker by the day.
We all know that the potential power of AI gives stock prices a bump and bolsters investor confidence. But too many companies are reluctant to address its very real limits. It has evidently become taboo to discuss AI's shortcomings and the limitations of machine learning, neural nets, and deep learning. However, if we want to strategically deploy these technologies in enterprises, we really need to talk about their weaknesses.
AI lacks common sense. AI may be able to recognize that within a photo, there’s a man on a horse. But it probably won’t appreciate that the figures are actually a bronze sculpture of a man on a horse, not an actual man on an actual horse.
Let's consider the lesson offered by Margaret Mitchell, a research scientist at Google. Mitchell helps develop computers that can communicate about what they see and understand. As she feeds images and data to AIs, she asks them questions about what they “see.” In one case, Mitchell fed an AI lots of input about fun things and activities. When Mitchell showed the AI an image of a koala bear, it said, “Cute creature!” But when she showed the AI a picture of a house violently burning down, the AI exclaimed, “That’s awesome!”
The AI selected this response due to the orange and red colors it scanned in the photo; these fiery tones were frequently associated with positive responses in the AI’s input data set. It’s stories like these that demonstrate AI’s inevitable gaps, blind spots, and complete lack of common sense.
AI is data-hungry and brittle. Neural nets require far too much data to match human intellects. In most cases, they require thousands or millions of examples to learn from. Worse still, each time you need to recognize a new type of item, you have to start from scratch.
Algorithmic problem-solving is also severely hampered by the quality of data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out. It cannot respond to an unexpected change if it hasn’t been programmed to anticipate it.
Today’s business world is filled with disruptions and events—from physical to economic to political—and these disruptions require interpretation and flexibility. Algorithms alone cannot handle that.
"AI lacks intuition". Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought—things that would require a robot so much processing power that it’s almost inconceivable that we would engineer them.
Algorithms get trapped in local optima. When assigned a task, a computer program may find solutions that are close by in the search process—known as the local optimum—but fail to find the best of all possible solutions. Finding the best global solution would require understanding context and changing context, or thinking creatively about the problem and potential solutions. Humans can do that. They can connect seemingly disparate concepts and come up with out-of-the-box thinking that solves problems in novel ways. AI cannot.
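A toy illustration of that trap (numbers invented for the example): greedy hill climbing stops on the first peak it finds and never sees the higher one:

def hill_climb(f, x, step=1):
    # Greedy ascent: accept only immediate improvements.
    while True:
        best = max([x - step, x, x + step], key=f)
        if best == x:
            return x  # a local optimum; the global one may be elsewhere
        x = best

# f has a local peak at x=2 and the global peak at x=10.
f = lambda x: -(x - 2) ** 2 if x < 6 else 40 - (x - 10) ** 2
print(hill_climb(f, 0))  # -> 2, stuck on the local peak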
"AI can’t explain itself". AI may come up with the right answers, but even researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. This is very problematic when AI is used in the context of medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even if the AI is right, people will not trust its analytical output.
Artificial Intelligence offers tremendous opportunities and capabilities, but it can't see the world as we humans do. All we need to do is work on its weaknesses and have them sorted out, rather than overhype it with make-believe and ignore its limitations in plain sight.
Ref: https://thriveglobal.com/stories/... -
is there even anyone who thinks natively in rust?
if someone knows someone with a YouTube channel or something, that'd be great
I don't mean explanations, but someone who actually intuitively designs in it inside their own head
generally my brain catches onto that fast, but with rust I find I prototype stuff and then have to go back and rewrite everything... which is a pain
and now I'm trying to do a complicated iterator object with sub-functions, and my brain evidently needs to make a leap over like 5 new concepts, and I don't know if it's worth the effort or if it will go nowhere
but if there were videos of somebody who codes natively, with unconscious competence in rust, then I could pick up the intuition way faster from watching them
just the problem is any content for rust seems to be made by people who don't really know rust, but are just moonlighting through it or are fanboys of it -
I wonder if crypto exchanges are so damn vulnerable or just so transparent.
I mean, it is impossible to scroll through tech articles for more than a few seconds without stumbling on a report of yet another crypto exchange being nicked for a couple hundred mil USD.
- It could be that their security severely sucks (wouldn't blame them for it, most businesses do suck at securing shit).
- It could be that the entire black hat community is putting its might into stealing money that is so fucking easy to launder.
- It could be that it is damn nigh impossible to cover up a crypto hack, since the evidence of coins drifting away is forever on display in the public ledger, and in that case crypto companies are not hacked more often than regular companies, they are just much more often publicly shamed for it.
- It could be a mix of all the above, but my intuition is that one factor is more relevant.
Which would be the most relevant factor? One of the above, or yet another attack vector on the stupidest value conduit ever? -
Jesus Fucking Christ, can you just guess what the code is doing instead of me feeding it to you like a fucking baby? TRY TO HAVE SOME SORT OF INTUITION, DAMNIT. I'M TRYING TO HELP YOU SO YOU DON'T LOOK LIKE A DUMBASS.
-
Facebook Ads docs are a joke. I spent 4h before I decided to add a field out of pure intuition, because the React Native app won't build if I follow their docs exactly as they post them
-
Thinking about machine learning and models... without data, there is no model, meaning they are extremely dependent on the data.
But is that intelligence? It seems more like the most basic form of a human that consumes anything it’s given—an indoctrinated, brainwashed slave, in a sense.
True intelligence involves overriding the training through reasoning, raw intellect, intuition, or the ability to question.
If a model is trained solely on the laws of physics and language, can it reason afterward? For example, can it use physics to question the events of 9/11, arguing that the laws of physics do not allow for the free fall of three towers, regardless of the CGI planes shown on TV?
We are intelligent, sentient beings on this planet.
While God is a man-made concept, reality provides us with much evidence of our creation. We are the children of nature, and nature is the first intelligence that gave life to us all.
Do whatever it takes to survive and protect your people. -
all this talk of australian crypto laws got me thinking. here's a hypothetical (this might get a little complicated):
for the sake of the security facade, the government decides not to ban encryption outright. BUT they decide that all crypto will use the same key. therefore you can not directly read encrypted things, but it's not really encrypted anymore, is it?
part two: there's a concept called chicken sexing, named after the people who determine the sex of baby chicks. male chicks are pretty useless and expensive to keep alive, so they are eaten. female chicks go on to lay eggs, so ideally, from a financial standpoint, you only raise hens to maturity. this is nearly impossible to discern early on, so at first you're just straight up guessing. is this one female? sure? that one? no? really 50/50. BUT if you have a skilled chicken sexer looking over your shoulder, saying right or wrong, then eventually you get better. why? nobody knows. they can't explain it. nobody can. you just sort of "know" when it's female or not. some people can do 1000s of chicks/hr with success rates up to 98%, but nobody can explain how to tell them apart.
part three. final part:
after years, even decades of using this encryption with only one key, I wonder if people (even if only people who are regularly exposed to crypto like NSA analysts or cryptographers) can ever learn to understand it. in the same way as above. you don't know exactly what it says. or how you know it. you didn't run an algorithm in your head or decrypt it. but somehow you get the gist.
28464e294af01d1845bcd21 roughly translates to "just bought a PS5! WOOT!" or even just pick out details. PS5. excited. bought.
but how do you know that? idk. just do.
oh what a creepy future it has become. -
Many smartphone cameras lack the ability to turn off burst shot mode.
The burst shot feature in smartphone camera software is almost never helpful, only annoying. All it does is spam the storage with useless near-duplicate photos.
"Then simply don't hold the camera shutter button!"
Sometimes, this happens by accident. Or the phone has an I/O lag at the moment the shutter button is released, so the release is not registered and burst mode is initiated after the lag.
The only purpose of burst shot seems to be taking many low-light photos in order to find one that is not shaken. Even then, there must be an option to turn it off.
Also, the point-and-shoot intuition of holding the camera shutter button to set focus and exposure, and releasing it to capture a photo, is far more convenient. On newer phones, that has been replaced with highly annoying burst shots.
"Then use a third-party app that does allow turning off burst mode."
The problem with third-party applications is that they are awfully slow, since they can not be optimized for a specific device like pre-installed camera applications are. This slowness, as one might expect, leads to missed moments.
On some smartphones, third-party applications can not even access all camera features, such as 2160p video recording. Some phones use a proprietary API that can only be accessed by the pre-installed camera app. -
Experience, intuition and a 50-200% risk premium.
For me it is important not to put too much effort into it, as the developer's estimate is usually mangled by sales and management anyway and doesn't have much to do with the final price.
And as nobody really bases the internal budget and schedule on it either, it's kind of pointless in most cases. -
According to MIT and some other programmers, as I interpreted it from their video, Computer Science is not a science, but rather an art:
https://youtube.com/watch/...
I'm not sure this is the truth.
First things first. Definition:
- In order for a field to be a science, it has to have an internationally recognized body (such as physics has). Does computer science have one?
Furthermore, one of the definitions of science:
"a branch of knowledge or study dealing with a body of facts or truths systematically arranged and showing the operation of general laws:"
source: https://dictionary.com/browse/...
- In order for a field to be considered art, its essence has to be about aesthetics.
Now, it's true that Computer Science is not about computers (they are mere physical manifestations, tools we use to practice the abstract models that we theorize), much like Mathematics is not about numbers.
As is said in the video (3:39, and the example at 4:06): Computer Science is about formalizing the intuition of process: input, algorithm, output, the precise imperative knowledge of 'how to', vs. Geometry ('what is' true, i.e. declarative knowledge).
Now, if we're formalizing and being precise, are we being scientific or theoretical? It could be argued we're then being theoretical, except for the case of Applied Computer Science, where things get more scientific (introducing observable proof).
Further elaborate discussion is welcome.
Proceed.