Search - "computation"
-
So I cracked prime factorization. For real.
I can factor a 1024-bit product in 11 hours on an i3.
No GPU acceleration, no massive memory overhead. Probably a lot faster with parallel computation on a better CPU, or even on a GPU.
4096 bits in 97-98 hours.
Verifiable. Not shitting you. My heart's beating out of my fucking chest. Maybe it was an act of god, I don't know, but it works.
What should I do with it?
-
This week I reached a major milestone in a Machine Learning/Music Analysis project that I've been working on for a long time!!
I'm really proud to launch 'The Harmonic Algorithm' as an open source project! It represents the evolution of something that's grown with me through two theses (initially in music analysis and later in creative computation) and has been a vessel for my passion in both Music and Computation/Machine Learning for a number of years.
For more info, detailed usage examples (with video clips) and installation instructions for anyone inclined to try it out, have a look at the GitHub repo for the project:
https://github.com/OscarSouth/...
"The Harmonic Algorithm, written in Haskell and R, generates musical domain specific data inside user defined constraints then filters it down and deterministically ranks it using a tailored Markov Chain model trained on ingested musical data. This presents a unique tool in the hands of the composer or performer which can be used as a writing aid, analysis device, for instrumental study or even in live performance."1 -
Me: "It's a balance between three things: you either optimize for computation, memory use, or programming effort. Computers don't have a infinitely fast processors with an infinite amount of memory."
Coworker: "Did anybody tell Java?"3 -
dates are just an index of time
practicing is just offsetting your initial, natural ability in a positive direction
do you guys ever just think of things in an abstract sense?
what are other examples?
-
Guys, if you have an extra computer lying around, support the cause by contributing your device as a resource to SETI:
https://setiathome.berkeley.edu
If you discover something, the credit goes to you! If you are not happy, you can contribute to various other projects which are in need of your computation. Kindly consider.
-
I've assembled enough computing power from the trash. Now I can start to build my own personal 'cloud'. Fuck I hate that word.
But I have a bunch of i7s and i5s on hand, in towers. Next is just to network them and set up some software to receive commands.
So far I've looked at Ray and Dispy for distributed computation. If there are others that any of you are aware of, let me know. If you're familiar with any of these and know which one is the easier approach to get started with, I'd appreciate your input.
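To make the question concrete, here's a minimal sketch of how the Ray version could look once the boxes are networked (assuming Ray is installed on every node and one machine has been started with `ray start --head`); the work function is just a placeholder:

```python
import ray

ray.init(address="auto")  # connect this script to the already-running head node

@ray.remote
def analyze(semiprime):
    # placeholder for the real per-number work (deriving the known variables, etc.)
    return semiprime % 1000

# scatter a batch of numbers across every machine in the cluster, then gather
futures = [analyze.remote(n) for n in range(10_001, 10_201, 2)]
results = ray.get(futures)
print(len(results), "results collected")
```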
The goal is to get all these machines up and running, a cloud that's as dirt cheap as possible, and then train it on sequence prediction of the hidden variables derived from semiprimes. Right now the set is unretrievable, but there are a lot of heavily correlated known variables, and so I'm hoping the network can derive better and more accurate insights than I can in a pinch.
Because any given semiprime has numerous (hundreds of known) identities which immediately yield both of its factors if, say, a certain constant or quotient is known (it isn't), knowing any *one* of them and the correct input is equivalent to knowing the factors of p.
So I can set each machine to train and attempt to predict the unknown sequence for each particular identity.
Once the machines are set up and I've figured out which distributed library to use, the next step is to set up Keras and train the model using, say, all the semiprimes under one to ten million.
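Roughly the kind of Keras model I have in mind, with stand-in data (the real inputs and targets would be the identity-derived variables, not these random placeholders):

```python
import numpy as np
import tensorflow as tf

# Stand-in dataset: 32 "digit" features per semiprime, one hidden value to predict.
X = np.random.randint(0, 10, size=(10_000, 32))
y = np.random.rand(10_000)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10, output_dim=16),  # embed each digit
    tf.keras.layers.LSTM(64),                                # sequence model over the digits
    tf.keras.layers.Dense(1),                                # regress the hidden variable
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=128, validation_split=0.1)
```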
I'm also working on a new way of measuring information: autoregressive entropy. The idea is that the prevalence of small numbers when searching for patterns in sequences is largely ephemeral (there's no long-term pattern), and AE allows us to put a number on the density of these patterns in a partial sequence, but it's only an idea at the moment and I'm not sure what use it has.
Here's hoping the sequence prediction approach works.
-
Over the last year, I’ve only started learning computer science at uni, never done it before.
I’ve done units in:
- Alg. and programming fundamentals in python
- Intro to comp sci
- alg. and data structures
- theory of computation
Guess the point of this is, “why do people code, what aspirations do you all have?”
Cause rn, I’m all about “I have no idea what I’m doing, coding just seemed cool and I wanted to try it out.” Don’t know where to go
Someone inspire me???
Here is a legit reason for you to brag about what you do and what you're going to do 😉
-
First of all, merry christmas to everyone on devrant.
Second, another interesting paper--this time on pattern classification using piecewise linear functions vs classic spiking neural nets.
Supposedly it was a *six million* percent improvement in computation time versus the spiking simulation. That's my five-minute overview of the document anyway.
Highly unusual application (hadn't seen it done before now but maybe I'm unfamiliar). Check it out:
https://link.springer.com/chapter/...
-
It's the last week of "Theory of computation" and now I'm thinking why the hell would I enroll in an 8am lab.
-
Is it ethical to charge a client for the runtime of a computation, i.e. CPU time?
I usually don't, since it doesn't cost me anything to leave my machine running overnight. But a few of my friends told me that I should.
More context:
I do sometimes do freelance work for professors as an RA. At times you need to leave a script running for like 5+ hours, during which I just either procrastinate or go to the gym or sleep. The energy cost of the computer running is barely a dollar.
I charge by the hour on my timesheet, or sometimes a negotiated price, which is also usually computed from the estimated work time it would take.
-
Windows being like... "60 secs left! Nonono wait it's 50 mins! Aaah I wanted to say 10. It's definitely 5 now, I am sure this time, 16 is correct!"
God damn it, stop wasting computation power for this boolshit and copy faster instead!!!
-
I would have to say the first start-up I worked with had the worst recruiters. Albeit they were seniors of mine, and not full-fledged professionals, but this was pretty ridiculous.
So at the interview (which I won by winning a hackathon in college), they asked me the standard questions about my current knowledge and what I hope to achieve in the company. When they asked me their tech questions, one program that they thought was tough, I solved in 2 minutes. I was interviewing with 3 other people who hadn't gotten the answer. Naturally I doubted myself due to the lack of answers being produced. The recruiters themselves didn't understand my answer initially. So much so that they were convinced I was wrong (at this time the others were coming up with, and submitting, their answers, which the recruiters naturally expected from us). So to give me the benefit of the doubt, they whipped out a laptop to run my code, and guess what? It worked, and took NOTICEABLY less computation time.
Needless to say I got the job, but the look on my recruiters' faces after exclaiming I was wrong, then they themselves being proven wrong? Priceless. xD
-
We need to separate concerns. Too many CS courses skip over theory and teach outdated tools and technologies, often those of a sponsor who is failing in the market.
Computer Science is supposed to be about the science and formalisms of computation. The job of programming is Software Engineering. A few colleges have SE degrees, but too few.
No one understands anymore the likes of Knuth, McCarthy, Dijkstra, and Hoare. I'm willing to bet that most of you have never read any of their work. Few people really understand their impact on the tools we use today or the importance of their work. CS courses should teach that and expand on it so we can get more huge leaps in tools and concepts.
But we also need Software Engineering to teach students current tools and the latest paradigms.
CS, as it is, doesn't do that. -
Sooo I'm typically a proponent of physical copies of books, as I'd rather sit and read them, write in them and take notes. Essentially all my books turn into something out of the "half blood prince" potions book from Harry Potter.
But it's so inconvenient, as my books are either in my office or in the library at home. It ends up being something like connecting a USB... the book I need at the time is always in the opposite place from wherever I currently am.
Also, all the books I want now are newer and none are on the used market for a reasonable price.
So I gave in and bought an iPad with the hope of putting the books in PDF form on it... I'll pay for some PDFs, but hey, if I can get it free thru a google search then it is what it is lol.
Not sure how I'm gonna adapt to reading on a tablet, as I really prefer a physical book.. hell, I still use National Brand computation notebooks for all my notes. Nothing beats writing it down, AND I still have an IBM Selectric 3 and a Swintec; nothing beats sitting down and just letting the thoughts flow neatly onto a piece of paper and then gluing it into the notebook.
Anyway, whatcha y'all's thoughts on using an iPad as a digital library of books, using the Apple Pencil to annotate them? I bought the 12.9 inch as the screen size is closest to a sheet of paper.
Also, I don't read fiction; all the books I read are nonfiction, reference manuals, textbooks, data sheets, user manuals, stuff like The Art of Computer Programming by Knuth, Kent Beck, Robert Martin, Fowler books, etc.
-
They ask me, why do you hate Python? Well, maybe because I'd prefer a fucking warning instead of the program fucking exiting after 2 hours of computation when a parameter is unexpected. Fuck off
-
Day 1:
Optimizing huge problems for the company. Get mail. *sigh*; Why is your script using up half the CPU on our thin clients? *place in complaints folder and go on*
Day 2:
Boss asks about it during scrum meeting. *Oh shits*
Need a cluster. Been asking for it for months...
Day 3:
Start runs on all thin clients. *Thou shalt feel my wrath*
Complaints folder floods.
Day 4:
Expect rage from boss.
"IT seems to have found a cluster for you at last."
Finally! -
In the 90s most people had touched grass, but few touched a computer.
In the 2090s most people will have touched a computer, but not grass.
But at least we'll have fully sentient dildos armed with laser guns to mildly stimulate our mandatory attached cyber-clits, or alternatively annihilate thought criminals.
In other news, my prime generator has been exhaustively checked against all primes from 5 to 1 million. I used Miller-Rabin with k=40 to confirm the results.
The set the generator creates is the join of the quasi-Lucas-Carmichael numbers, the Carmichael numbers, and the primes. So after I generated a number I just had to treat those numbers as 'pollutants' and filter them out, which was dead simple.
Whats left after filtering, is strictly the primes.
I also tested it randomly on 50-55 bit primes, and it always returned true, but that range hasn't been fully tested so far because it takes 9-12 seconds per number at that point.
I was expecting maybe a few failures by my generator. So what I did was write a function, genMillerTest(), and all it does is take some number n, return the next prime after it (using my functions nextPrime() and isPrime()), and then test it against Miller-Rabin. If Miller-Rabin returns false, then I add the result to a list. And then I check *those* results by hand (because Miller-Rabin can occasionally return false positives, though I'm not familiar enough with the math to know how often).
Well, imagine my surprise when I had zero false positives.
Which means either my code is generating the exact same set as Miller-Rabin (up to some very large value of n), or the chance of Miller-Rabin (at k=40 rounds) returning a false positive is vanishingly small (the error probability per composite is at most 4^-k, so effectively zero at k=40).
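For reference, this is roughly the k=40 check I'm comparing against (a standard textbook Miller-Rabin, not my generator's code):

```python
import random

def miller_rabin(n, k=40):
    """Probabilistic primality test; per-composite error is at most 4**-k."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:           # write n-1 as d * 2**r with d odd
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # witness found: definitely composite
    return True                 # probably prime

print(miller_rabin(2**31 - 1))  # True: 2_147_483_647 is a known (Mersenne) prime
```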
My next steps should be to parallelize the checking process, and set up my other desktop to run those tests continuously.
Concurrently I should work on figuring out why my slowest primality tests (there are six of them, though I think I can eliminate two) are so slow and whether I can better estimate or derive a pattern that allows faster results by better initialization of the variables used by these tests.
I already wrote some cases to output which tests most frequently succeeded (if any of them pass, then the number isn't prime), and therefore could cut short the primality test of a number. I rewrote the function to put those tests in order from most likely to least likely.
I'm also thinking that there may be some clues for faster computation in other bases, or perhaps in binary, or in inspecting the patterns of values in the natural logs of non-primes versus primes. Or even in looking into the *execution* time of numbers that successfully pass as prime versus ones that don't. There's a bevy of possible approaches.
The entire process for the first 1_000_000 numbers ran 1621.28 seconds, or just shy of a tenth of a second per test, but I'm sure that's biased toward the head of the list.
If there are any other approaches or ideas I may be overlooking, I wouldn't know where to begin.
-
The next step for improving large language models (if not diffusion) is hot-encoding.
The idea is pretty straightforward:
Generate many prompts, or take many prompts as a training and validation set. Do partial inference, and find the intersection of best overall performance with least computation.
Then save the state of the network during partial inference, and use that for all subsequent inferences. Sort of like LoRA, but for inference instead of fine-tuning.
Inference, after all, is what matters. And there has to be some subset of prompt-based initializations of a network that perform (generally) as well as a full inference step, regardless of the prompt.
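The crudest version of what I mean, using a small causal LM's key/value cache as the saved "partial inference" (GPT-2 and the prompt here are just stand-ins):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# 1) Run the shared prompt prefix once and keep the cached key/value states.
prefix = tok("You are a recommendation engine. User history:", return_tensors="pt")
with torch.no_grad():
    cached = model(**prefix, use_cache=True).past_key_values

# 2) Later requests reuse that snapshot instead of re-encoding the prefix.
new_ids = tok(" item_42 item_7. Recommend:", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(input_ids=new_ids, past_key_values=cached, use_cache=True)
print(tok.decode(out.logits[:, -1].argmax(-1)))  # next-token guess, prefix cost paid once
```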
Likewise with diffusion, there likely exist some priors (based on the training data) that speed up reconstruction or lower the network loss, allowing us to substitute a 'snapshot' that has the correct distribution without necessarily performing a full generation.
Another idea I had was 'semantic centering' instead of regional image labelling. The idea is to find some patch of an object within an image and ask, for all such patches that belong to an object, what best describes the object? If it were a dog, what patch of the image is "most dog-like", etc. I could see it as being much closer to how the human brain quickly identifies objects by shortcuts. The size of such patches could be adjusted to minimize the cross-entropy of classification relative to the tested size of each patch (pixel-sized patches, for example, might lead to too high a training loss). Of course it might allow us to do a scattershot 'at a glance' type lookup of potential image contents; even if you get multiple categories for a single pixel, it greatly narrows the total span of categories you need to do subsequent searches for.
In other news I'm starting a new ML blackbook for various ideas. Old one is mostly outdated now, and I think I scanned it (and since buried it somewhere amongst my ten thousand other files like a digital hoarder) and lost it.
I have some other 'low-hanging fruit' type ideas for improving existing and emerging models but I'll save those for another time.
-
We specified a very optimistic setup for a data science platform for a client....
Minimum: one machine with a 16-core CPU and 64 GB of RAM to process data...
Client's IT department: Best we can do is an 8-core, 16 GB server.
Literally what I have on my laptop.
Data scientist doesn't use any out-of-memory data processing framework, e.g. Dask, despite us telling him it's the best way to be economical with memory; ipykernel kills the computation anyway because it runs out of memory.
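For the record, the kind of thing I suggested looks roughly like this (file and column names made up):

```python
import dask.dataframe as dd

# Lazily read a pile of CSVs that would never fit in 16 GB at once;
# Dask only pulls partitions into memory as they're needed.
df = dd.read_csv("transactions-*.csv")

# Build the computation graph, then run it out-of-core.
summary = df.groupby("customer_id")["amount"].mean()
result = summary.compute()   # streams chunks through memory instead of loading everything
print(result.head())
```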
Data scientist has a 64GB machine himself so he says it's fine.
Purpose of the server: rendered pointless.
-
I had the idea that part of the problem of NN and ML research is we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement weren't just from new architectures, but entirely new approaches to how data is transformed and how we calculate loss, for example KL divergence.
And it occurred to me all we really need is training/test/validation data, and with the right approach we can let the system discover the architecture (been done before), but also the nonlinear and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here or a maxpool there, but derive implementations of these all on their own.
More importantly with a little pruning, we could even use successful examples for bootstrapping smaller more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't, or do training, testing, and validation in batches, to anneal a network in the correct direction.
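A stripped-down sketch of just the selection step (toy data, a tiny fixed architecture, and a hand-picked candidate pool standing in for generated nodes, scored by validation loss):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 8)                       # toy regression data
y = (2 * X[:, :1] + X[:, 1:2]).sin()
Xtr, ytr, Xva, yva = X[:400], y[:400], X[400:], y[400:]

candidates = {                                 # nonlinearity/loss pairs to compete
    "relu+mse":   (nn.ReLU(),  nn.MSELoss()),
    "tanh+mse":   (nn.Tanh(),  nn.MSELoss()),
    "relu+l1":    (nn.ReLU(),  nn.L1Loss()),
    "silu+huber": (nn.SiLU(),  nn.HuberLoss()),
}

def train_and_score(act, loss_fn, epochs=200):
    model = nn.Sequential(nn.Linear(8, 32), act, nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(Xtr), ytr).backward()
        opt.step()
    with torch.no_grad():                      # common yardstick for every candidate
        return nn.functional.mse_loss(model(Xva), yva).item()

scores = {name: train_and_score(a, l) for name, (a, l) in candidates.items()}
best = min(scores, key=scores.get)
print(scores, "-> keep", best)
```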
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision), and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects, sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point-mutations typically cause disease.
It might seem like this wouldn't work out of the gate, just on the basis of intuition, but I think the benefit of working through node substitutions or entire subgraph substitution is that we can check test/validation loss before training is even complete.
If we train a network to specify a known loss, we can even have that evaluate the networks themselves, and run variations on our network loss node to find better losses during training time, and at some point let nodes refer to these same loss calculation graphs within themselves, switching between them dynamically via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await() style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
I've written all the classes and started on the interpreter itself, just a few things that need to be fleshed out now.
Here's my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try MNIST, or teach it step by step how to do sequence masking and prediction, dunno yet.
-
Not so much a problem with the way CS is taught, but I think it's a problem that a lot of people put emphasis ONLY on programming (and maybe data structures and algorithms) and ignore things like Computer Architecture or Theory of Computation.
Most of the CS syllabi I've seen are built very well, but many students (and some teachers) seem to ignore a bunch of subjects because they don't contribute to making them "hireable". -
!rant
Julia's multithreading macro is soooo awesome and clean for when you want to do computation work :3
-
So today I got to see one of the most stupid architectural choices I have ever seen.
They have a service-oriented architecture. Mainly Python and Elixir.
A lot of computation goes in the Python services.
And the Elixir services are used to expose REST APIs. Basic ones, basically DB proxies.
Not a lot of async, or communication... Just plain CRUD.
Why the fuck do you use Elixir for that?? And now they can't recruit someone... And the CTO doesn't get why it was a stupid choice!!!
And in Python, they use async functions with sync DB APIs...
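A minimal illustration of why that combination hurts (time.sleep stands in for a blocking DB driver call):

```python
import asyncio, time

# Blocking call inside an async handler: the whole event loop stalls,
# so every other request has to wait too.
async def bad_handler():
    time.sleep(2)                              # sync DB driver call
    return "done"

# One way out without switching drivers: push the blocking call to a thread.
async def better_handler():
    await asyncio.to_thread(time.sleep, 2)     # event loop stays free
    return "done"

async def main():
    t0 = time.perf_counter()
    await asyncio.gather(better_handler(), better_handler())
    # ~2s here; two bad_handler() calls would take ~4s because they serialize.
    print(f"two concurrent requests: {time.perf_counter() - t0:.1f}s")

asyncio.run(main())
```
-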
Find an end to this story:
"It was late summer. I just had finished programming my distributed computation Service for servers, i felt so nice. I started the servers; looking at the small, dusty Screen attached to the meters of computers arranged to be a great server. But when it started booting i instantly noticed somethind being wrong:... "3 -
I've seen a lot of buzz around the EU's GDPR, and since I don't live there I'm wondering if it applies only if you store personal data, and whether it should count if the data is hashed, for example 🤔
Let's say you hash a client's IP: it's not technically his data anymore, you've irreversibly transformed it into something else, like a computation.
For example let's say he provides you with a number and you multiply it by another and store the result, let's say 2 x 2 = 4, Is the 4 his data or yours?
Also I'm really interested in the general opinion of ranters about article 13.
-
It feels so damn good to know we live in a time where computation has become so damn cheap that Amazon gives away a Tesla-configured system for around $2.50 an hour... Like seriously.. all this progress took its time.. but still seems really fast.. well.. good for us.. no rant here 😂
-
Many many hours into b i g computation I realized I forgot to save the result :(
(I'm too defeated to cuss right now, can one of the experts do it for me thx)
-
Have my THEORY OF COMPUTATION exam tomorrow 😭
Shitload of YouTube videos left to cover. Turing machines, Chomsky normal form, code generation... I'm so ded. Fuck my soul :/
-
YGGG IM SO CLOSE I CAN ALMOST TASTE IT.
Register allocation pretty much done: you can still juggle registers manually if you want, but you don't have to -- declaring a variable and using it as an operand instead of a register is implicitly telling the compiler to handle it for you.
What's more, spilling to the stack is done automatically, keeping track of whether a value is or isn't required so it's only done when absolutely necessary. And variables are handled differently depending on whether they are input, output, or both, so we can eliminate making redundant copies in some cases.
It's a thing of beauty, defenestrating the difficult aspects of assembly, while still writing pure assembly... well, for the most part. There's some C-like sugar that's just too convenient for me not to include.
(x,y)=*F arg0,argN. This piece of shit is the distillation of my very profound meditations on fuckerous thoughtlessness, so let me break it down:
- (x,y)=; fuck you in the ass, I can return as many values as I want. You don't need the parens if there's only a single return.
- *F args; some may have thought I was dereferencing a pointer, but I'm calling F and passing it arguments; the asterisk indicates I want to jump to a symbol rather than read its address or the value stored at it.
To the virtual machine, this is three instructions:
- bind x,y; overwrite these values with Fs output.
- pass arg0,argN; setup the damn parameters.
- call F; you know this one, so perform the deed.
Everything else is generated; these are macro-instructions with some logic attached to them, and there's a step in the compilation dedicated to walking the stupid program for the seventh fucking time that handles the expansion and optimization.
So whats left? Ah shit, classes. Disinfect and open wide mother fucker we're doing OOP without a condom.
Now, obviously, we have to sanitize a lot of what OOP stands for. In general, you can consider every textbook shit, so much so that wiping your ass with their pages would defeat the point of wiping your ass.
Lets say, for simplicity, that every program is a data transform (see: computation) broken down into a multitude of classes that represent the layout and quantity of memory required at different steps, plus the operations performed on said memory.
That is most if not all of the paradigm's merit right there. Everything else that I thought to have found use for was in the end nothing but deranged ways of deriving one thing from another. Telling you I want the size of this worth of space is such an act, and is indeed useful; telling you I want to utilize this as base for that when this itself cannot be directly used is theoretically a poorly worded and overly verbose bitch slap.
Plainly, fucktoys and abstract classes are a mistake, autocorrect these fucking misspelled testicle sax.
None of the remaining deeper lore, or rather sleazy fanfiction, that forms the larger cannon of object oriented as taught by my colleagues makes sufficient sense at this level for me to even consider dumping a steaming fat shit down it's execrable throat, and so I will spare you bearing witness to the inevitable forced coprophagia.
This is what we're left with: structures and procedures. Easy as gobblin pie.
Any F taking pointer-to-struc as its first argument that is declared within the same namespace can be fetched by an instance of the structure in question. The sugar: x ->* F arg0,argN
Where ->* stands for failed abortion. No, the arrow by itself means fetch me a symbol; the asterisk wants to jump there. So fetch and do. We make it work for all symbols just to be dicks about it.
Anyway, invoking anything like this passes the caller to the callee. If you use the name of the struc rather than a pointer, you get it as a string. Because fuck you, I like Perl.
What else is there to discuss? My mind seems blank, but it is truly blank.
Allocating multitudes of structures, with same or different types, should be done in one go whenever possible. I know I want to do this, and I know whichever way we settle for has to be intuitive, else this entire project has failed.
So my version of new always takes an argument, don't you just love slurping diarrhea. If zero it means call malloc for this one, else it's an address where this instance is to be stored.
What's the big idea? Only the topmost instance in any given hierarchy will trigger an allocation. My compiler could easily perform this analysis because I am unemployed.
So where do you want it, on the stack, on the heap, you want to reutilize any piece of ass, where buttocks stands for some adequately sized space in memory -- entirely within the realm of possibility. Furthermore, evicting shit you don't need and replacing it with something else.
Let me tell you, I will give your every object an allocator if you give the chance. I will -- nevermind. This is not for your orifices, porridges, oranges, morpheousness.
Walruses.
-
Fuck yeah... I have uploaded my major computation files to S3 and created Lambdas from those files (including numpy and pandas), and now I have only routes and invoke strategies in my EC2... looking for cost reduction...
-
On https://reactjs.org/docs/... it is declared that useEffect runs after render is done.
However... if you put an expensive calculation or operation into useEffect, e.g. "add +1 to x a billion times", it will get stuck after updating the data, but before the re-render is done.
This leads to inconsistency between the DOM and the state, which I believe is a foundational point of React. Moreover, the statement that "useEffect runs after render" is false.
See also: https://stackoverflow.com/questions...
The solution is to add a timeout to that expensive operation, e.g. 50 ms, so the re-render can finish first.
The integrity of my belief in React has taken some shrapnel today. Argh :D Guys, how can this be? It seems that useEffect is not being run after the re-render.
-
Can someone tell me why this worked
`int percentHealth = (getHealth() * 100) / maxHealth;`
But this didn't work
`int percentHealth = (getHealth() / maxHealth) * 100;`
It stressed me out for +2 hours.
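(Assuming getHealth() and maxHealth are ints: integer division truncates, so health / maxHealth is already 0 whenever health < maxHealth, and multiplying by 100 afterwards keeps it 0. Multiplying first keeps the value big enough to survive the truncation; the same effect with Python's floor division:)

```python
health, max_health = 37, 120

print((health * 100) // max_health)   # 30 -- multiply first, then truncate
print((health // max_health) * 100)   # 0  -- 37 // 120 truncates to 0 before the * 100
```
-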
Hey guys... I need some help. I have a React project which needs to fetch around 10 MB of JSON data from the server and then do a computation on that whole data set. The UI completely stops, obviously, because of the computation. I tried using promises for the computation but still nothing. The UI still stops completely. Can someone help please?
-
Coming up with tests that show we have met the requirements on the project. Some of the requirements are "use method X to perform this computation"; boss says "we can't simply refer to the documentation/source code demonstrating that this method is used"... WTF...
-
Who the fuck invented recursion 😡, it sucks in numerical computation, it takes an excessive amount of memory, recomputation and fffffffuck 😡. To calculate Fibonacci of 50, it feels like the MARK I. Fuck, they do nothing more than calling, like Peter calling Peter... 😡
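To be fair, the blow-up is naive recursion recomputing the same subproblems exponentially many times; memoize it (or go iterative) and Fibonacci of 50 is instant. A quick Python illustration:

```python
from functools import lru_cache

def fib_naive(n):        # ~2**n calls: fib(50) recomputes the same values billions of times
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):         # each value computed once: only 51 distinct calls for fib(50)
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))      # 12586269025, returns instantly
```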
-
val true : bool = isFrustrated(me : Human)
1) Honestly fuck SML. Whose goddamn idea was it to make a useless fucking programming language that does absolutely nothing relevant unless you're trying to learn recursion. Whose fucking idea was it to not even be able to have side effects. And who gives a shit if you can explicitly declare the type of variables on every single fucking line; that's what comments are for if you really need it. All this is aside from the fact that nobody ever has been like "OH IMMUTABLE TYPES? WOW IM SO HAPPY THIS IS SO USEFUL". At this point I feel like SML is basically a DFA - ABSOLUTELY FUCKING USELESS
2) Aside from that, whose idea was it to have two duplicate classes. There's 15-122 (Principles of Imperative Computation) and 15-150 (Principles of Functional Programming). So far the ONLY fucking thing that's different is we learned about work and span in 15-150 - OTHER THAN THAT IT'S LIKE TAKING THE EXACT SAME COURSE. BUT AGAIN. So then I have to fucking sit in lecture and pay attention for that tiny bit of information that is new amongst the giant cesspool of information that isn't. BECAUSE I ALREADY LEARNED IT.
Oh and did I mention that both classes are required to graduate as a CS major? Fuck me.
Thanks devRant for helping <3
Edit: We are 4 weeks into the semester so you'd expect we'd have gotten into the new stuff by now right????
-
Orchid lesson #many:
Church tuples exist only to demonstrate how general substitution is. Just like Church numerals, they aren't meant to be used for real computation and cause a lot of problems. Few type systems and fewer optimizers can deal with them, they're a pain to pass through FFI boundaries, and they're much slower in an interpreted context than a native smart array. And in a lazy language the tuple is almost always lighter than the code that generates it, so you want to generate the tuple eagerly and thunk the actual elements, if thunk you must.
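For anyone who hasn't met them: a Church pair is just a closure that hands its two elements to whatever selector function you pass in. Here's the shape in Python lambdas (Orchid's syntax differs, this is only the idea):

```python
pair = lambda a: lambda b: lambda f: f(a)(b)   # the "tuple" is a closure over a and b
fst  = lambda p: p(lambda a: lambda b: a)      # selector for the first element
snd  = lambda p: p(lambda a: lambda b: b)      # selector for the second element

p = pair(1)("two")
print(fst(p), snd(p))   # 1 two
```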
I'll go write a vector-based tuple and end this madness tomorrow. New version soon, probably.
With dynamic dispatch.