Comments
Wisecrack (58d): @Oktokolo Neither.
I want to train an AI that:
1. takes a large known set of variables hidden within semiprimes,
2. learns to generate the corresponding constants for a given semiprime ("muh magic numbers", as it were), and
3. spits out these constants.
Having the variables and identities tells us something about the semiprime's factors, but not the factors themselves. With the given constants we can derive the full identities, which is equivalent to knowing the factors themselves.
And with a model trained to classify sequences based on the known variables, we can use a known sequence as input to generate the next Nth digit of an unknown constant, feed that digit back into the known sequence, and repeat to recover the full constant that was previously unknown.
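Roughly, that loop would look something like the sketch below, where predict_next_digit() is a purely hypothetical stand-in for whatever trained model ends up doing the classification:

```python
# Sketch only: predict_next_digit() is a placeholder for a trained model that
# predicts the next digit given the sequence so far.
def extend_sequence(known_digits, n_new, predict_next_digit):
    seq = list(known_digits)
    for _ in range(n_new):
        d = predict_next_digit(seq)  # model's guess for the next digit of the unknown constant
        seq.append(d)                # feed the prediction back in and repeat
    return seq
```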
If you know the quotient a/u, but not the factor a or the denominator u, then knowing u gives you a.
Likewise, knowing d4a, you could generate d4a/d4, d4u/u, or (a/u)*(d4u/d4), etc.,
where d4, u, and a are all unknown.
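A toy numeric illustration of that point, using arbitrary example values for the names above:

```python
# Arbitrary example values; the point is only that the quotient plus one of the
# unknowns pins down the other.
a, u, d4 = 91, 7, 3
q = a / u                    # the quotient a/u
assert q * u == a            # knowing u (and the quotient) gives you a
assert (d4 * a) / d4 == a    # likewise d4a / d4 recovers a
```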
Wolle (58d): I recommend writing all of this down somewhere like a blog (not in devRant posts). That way you can use it in interviews or talks, and you'll also remember in the future what you did and how.
Nanos (58d): If your electricity is expensive, you might want to swap those big towers for AMD laptops, as those appear to use the least wattage.
Though I think the cheapest solution is probably an old Xeon system with 20+ cores.
For example, an AMD laptop here runs at 9 W when idle, with this CPU:
https://amd.com/en/products/...
Xeons can also give you quad-channel memory, as I think memory speed will be one of your bottlenecks.
Along with storage, perhaps; as such, NVMe SSDs are probably your friend there, maybe with a focus on choosing something with the highest random write speeds.
This way you can also help reduce network traffic, with fewer machines or even a single one.
Nanos (58d): You might also want to consider adding a GPU or two to each and using them as part of your computing power.
https://en.wikipedia.org/wiki/CUDA
Only available for Nvidia GPUs, I think?
I wonder if AMD does an equivalent?
Nanos (58d): A single multi-core machine could also allow you to run a 90%+ efficient PSU.
I'm reminded of getting my first 4-core CPU machine some years ago, only it sucked 3,000 W of power to run and sounded like a jet engine starting up with all of its huge fans!
Nowadays, that could cost you $5 USD an hour to run.
Wisecrack (58d): @NeatNerdPrime OpenStack looks neat, but is absolute overkill. What was your experience with it, in any case?
Wisecrack (58d): @Nanos Basically everything I have was throwaways, but as far as Xeons go, a cursory glance at the wiki tells me I don't want anything less than 2010-2012.
4-8 cores is the minimum.
Would love to do smaller units but I have to work with what I can basically grab for free, which seems to be i5s and i7s and the occasional graphics card.
On the plus side, plenty of spare parts ¯\_(ツ)_/¯
Thanks for the advice though. I'll keep an eye out for Xeon processors (and apparently Opterons). Think I passed up a few because they looked kind of old, but if they meet the requirements for core count, then maybe it's worth it.
@Wisecrack Well, for regular cloud computing it's all you need; it's able to group a bunch of physical machines into a large pool of resources.
If you want several VMs, high availability across zones, VLANs, and shared storage resources, you might want to look into this.
Unless you want to pay for a VMware setup.
Nanos (58d): @Wisecrack I'm reminded that a lot of my starter kit was thrown-away items.
I recently got myself a Dell Precision T5810 with an Intel Xeon E5-1650 v4, and have now moved from DDR3 to DDR4 land. (Expensive... but at least server RAM is cheap!)
Related link:
https://intel.co.uk/content/www/...
I've seen ones with more cores go for less, but I wanted something that had a high top speed (4 GHz vs my current 3.3 GHz max) for single-threaded applications, whilst still having more cores available for other apps that can take advantage of them.
And I thought the quad-channel RAM would be better than my current triple channel.
I hear you can find these kinds of things thrown away these days.
I paid for mine, since I no longer live in a place where that kind of thing exists, let alone gets thrown away!
Nanos (58d): I'm reminded I'm slightly venturing into that area myself, with my current PC becoming my backup PC, and my backup PC becoming my bedroom-only PC.
So if I need more computing resources, I could turn all three machines on. (The 3D printer also has a dedicated PC, and there's the house server as well, which is that nice 8-core AMD laptop mentioned earlier.)
Which reminds me, I've fitted a PCI/NVMe drive to my new PC; it's amazing to see the difference in speed between that and my old HD!
My old HD was carefully chosen to be the fastest in its day!
As such, if you are only able to use HDs rather than any SSDs, you probably want to keep disk access to a minimum.
Wisecrack (58d): [image]
Nanos (58d): @Wisecrack Well, with Windows 10 it would randomly pause for like 6 minutes; now, no pause at all!
I reckon I get about a 5% productivity improvement due to less time waiting for disk activity.
Let's say, scale-wise, a 9.
Nanos (58d): I'm reminded that Xeon CPUs often have a large on-chip cache, so if you could create code that resides there, it would be faster.
The same goes for any CPU's on-chip cache.
Related link:
https://stackoverflow.com/questions...
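As a rough, non-Xeon-specific illustration of why cache locality matters, traversing the same data in a cache-friendly order vs a strided order shows a measurable gap even from Python/NumPy:

```python
# Rough demo of cache locality: summing a matrix row-by-row (contiguous memory)
# vs column-by-column (strided) touches the same data with very different cache behaviour.
import time
import numpy as np

a = np.random.rand(4096, 4096)

t0 = time.perf_counter()
row_sum = sum(a[i, :].sum() for i in range(a.shape[0]))   # contiguous, cache-friendly
t1 = time.perf_counter()
col_sum = sum(a[:, j].sum() for j in range(a.shape[1]))   # strided, cache-unfriendly
t2 = time.perf_counter()

print(f"row-wise:    {t1 - t0:.3f}s")
print(f"column-wise: {t2 - t1:.3f}s")
```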
Nanos (58d): Note, it appears you cannot over-provision NVMe drives, which is disappointing.
Unless leaving unformatted space works for that purpose?
I tend to over-provision all the SSDs I can, for improved performance and increased lifespan. (Hence why I get 1 TB or larger ones, even though I don't really need that much space!)
Also, the larger you go, the better the speed in many cases!
I also go for power-cut-friendly versions, which tend to be more expensive than your bog-standard versions.
Even though I'm running a UPS, even that sometimes fails!
Also, most of them I buy second-hand too. (One is 8 years old!)
Only the NVMe I bought new; since it's now my main C: drive, I want it to last a lifetime! (The impression I got, though, was that you could over-provision it, but it appears you can only do that with the SATA SSD version, not the NVMe one!)
But it is a lot faster than the SATA SSD version...
I got a PCI NVMe adapter with a fan; it runs at 24°C. (No room for a heatsink!)
Wisecrack (57d): @NeatNerdPrime also don't forget to configure the turbo encabulator on the NVMe either.
Nanos (57d): Related links:
https://kingston.com/unitedkingdom/...
---
drives with less OP will perform well in Read intensive applications, but may be slower in write-intensive applications compared to drives with 32% OP.
---
https://medium.com/@reefland/...
Nanos (57d): This might be of interest:
https://hackaday.com/2012/10/...
https://freebasic.net/forum/...
---
[offtopic] 144 fully fledged computers. Just one chip. 20$
---
https://greenarraychips.com/home/...
@Nanos In my experience, when people set things up to allow overprovisioning, they tend to justify it by saying things like 'we simply want to use all the compute resources we have at our disposal', which in itself is a reasonable argument. However, the very same people also tend to (conveniently or not) forget that they had such a thing as overprovisioning, and then act surprised when their entire cluster grinds to a halt because that one application uses more than is available.
To me, cloud computing is like the financial system: don't lend more money than you can pay off, and don't go in overleveraged, otherwise you might end up in a really bad place.
Wisecrack (56d): @Nanos comes in, spamming his hardware porn all oblivious-like.
Quality shitposting. I don't know what I did to earn that sort of friendship, but never change, Nanos. Never change.
Nanos (55d): Talking of Xeons, I stumbled across this earlier:
http://xtremesystems.org/forums/...
---
Overclocker discovers Xeon E5 V3 Errata, Engineers exploit to unlock Turbo
---
scor (54d): @Wisecrack
A friend has been preaching this mantra to us for years now:
always, always, always use Proxmox as the base virtualisation layer.
All OSes and everything else go on top of that.
Wisecrack (54d): [image]
Nanos (54d): I just got to measure the wattage of my newer Xeon CPU PC vs my older Xeon CPU PC; it typically uses 50% less!
I notice the newer one runs as low as 1.3 GHz, whilst the older one never dropped below 3.3 GHz.
Wisecrack (54d): @Nanos you're masturbating to it right now, aren't you?
Over there hogging all the CPU-batin' material.
What would you go for in the GPU department?
Nanos (50d): @Wisecrack I think the only graphics card I lusted after was this one:
https://nascom.wordpress.com/other-...
Wisecrack (48d): @Nanos bruh, that GPU straddles the line between ancient and modern like a fossil on the K-T boundary. Very cool.
Nanos (48d): @Wisecrack Glad you liked it.
Talking of fossils, reminds me of:
https://livescience.com/41537-t-rex...
One day, when digging foundations inland, I came across a rock that, when cracked open, contained the intact skeleton of a razor clam.
You could see where its soft tissue had left an indent inside the rock, but all that was left was its shell.
But before I could retrieve it undamaged, another worker started to smash it to pieces. :-(
It did appear that the shell was not fossilised, but was still an actual preserved shell, which isn't supposed to happen to things 65+ million years old.
The sedimentary rock did not show any evidence that the creature had made its way inside, only that the rock had formed around it while it was still alive. (Maybe the rock was mud at some point, and slowly got too hard for the clam to manoeuvre inside of, leaving it trapped forever in a boulder.)
hjk101 (19d): @Nanos I don't know enough about the AI processing, but if the data is important you probably want ECC memory. At a certain point, desktop processors are going to beat server processors if the latter are too old; AMD makes the better desktop processors for workstation usage with the Ryzens.
NVMe can seriously impact the performance of your setup, but not if there's no persistence bottleneck and the data isn't flowing through the network.
Original rant:
I've assembled enough computing power from the trash. Now I can start to build my own personal 'cloud'. Fuck I hate that word.
But I have a bunch of i7s and i5s on hand, in towers. Next is just to network them and set up some software to receive commands.
So far I've looked at Ray and Dispy for distributed computation. If there are others that any of you are aware of, let me know. If you're familiar with any of these and know which one is the easier approach to get started with, I'd appreciate your input.
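For what it's worth, a minimal Ray sketch of the fan-out pattern this calls for; train_identity() and the data are placeholders, not anything from the actual project:

```python
# Minimal Ray sketch: one remote task per identity, fanned out across whatever
# machines have joined the cluster. train_identity() is a hypothetical placeholder.
import ray

ray.init(address="auto")  # assumes `ray start` has already been run on the networked boxes

@ray.remote
def train_identity(identity_id, semiprimes):
    # real code would train a model for this identity here
    return identity_id, len(semiprimes)

futures = [train_identity.remote(i, list(range(10_000))) for i in range(8)]
print(ray.get(futures))
```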
The goal is to get all these machines up and running, a cloud that's as dirt cheap as possible, and then train it on sequence prediction of the hidden variables derived from semiprimes. Right now the set is unretrievable, but there are a lot of heavily correlated known variables, so I'm hoping the network can derive better and more accurate insights than I can in a pinch.
Because any given semiprime has numerous (hundreds of known) identities which immediately yield both of its factors if, say, a certain constant or quotient is known (it isn't), knowing any *one* of them along with the correct input is equivalent to knowing the semiprime's factors.
So I can set each machine to train and attempt to predict the unknown sequence for each particular identity.
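For concreteness, one textbook example of such an identity (not necessarily one of the hundreds meant here): for n = p*q, the single quantity p + q already hands you both factors.

```python
# If n = p*q and s = p + q were somehow known, the factors fall out of the
# quadratic x^2 - s*x + n = 0. Illustration only.
from math import isqrt

def factors_from_sum(n, s):
    disc = s * s - 4 * n
    r = isqrt(disc)
    assert r * r == disc, "s must really be p + q"
    return (s - r) // 2, (s + r) // 2

print(factors_from_sum(91, 20))  # -> (7, 13)
```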
Once the machines are set up and I've figured out which distributed library to use, the next step is to set up Keras and train the model using, say, all the semiprimes under one to ten million.
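A bare-bones Keras sketch of a next-digit classifier, assuming base-10 digit sequences and a fixed context window; the layer sizes and the random stand-in data are placeholders, not the actual training set:

```python
# Placeholder data and layer sizes; the real training data would be digit windows
# drawn from the known variables/constants for semiprimes under ~10 million.
import numpy as np
from tensorflow import keras

WINDOW = 32  # how many previous digits the model sees

model = keras.Sequential([
    keras.layers.Embedding(input_dim=10, output_dim=16),   # digits 0-9
    keras.layers.LSTM(64),
    keras.layers.Dense(10, activation="softmax"),          # distribution over the next digit
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

X = np.random.randint(0, 10, size=(1000, WINDOW))  # stand-in context windows
y = np.random.randint(0, 10, size=(1000,))         # stand-in next digits
model.fit(X, y, epochs=1, batch_size=64)
```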
I'm also working on a new way of measuring information: autoregressive entropy. The idea is that the prevalence of small numbers when searching for patterns in sequences is largely ephemeral (there's no long-term pattern), and AE would let us put a number on the density of these patterns in a partial sequence, but it's only an idea at the moment and I'm not sure what use it has.
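Since "autoregressive entropy" isn't pinned down yet, purely as one possible reading: the conditional (next-symbol) entropy over a sliding context is a standard way to put a number on how much short-range pattern a partial sequence has; low values mean dense local patterns.

```python
# One possible proxy, not necessarily the intended definition: estimate the
# entropy of the next symbol given the previous k symbols of a partial sequence.
from collections import Counter, defaultdict
from math import log2

def next_symbol_entropy(seq, k=2):
    ctx_counts = defaultdict(Counter)
    for i in range(len(seq) - k):
        ctx_counts[tuple(seq[i:i + k])][seq[i + k]] += 1
    total = sum(sum(c.values()) for c in ctx_counts.values())
    h = 0.0
    for counts in ctx_counts.values():
        n_ctx = sum(counts.values())
        h += (n_ctx / total) * -sum((c / n_ctx) * log2(c / n_ctx) for c in counts.values())
    return h  # bits; 0.0 for a perfectly repetitive window

print(next_symbol_entropy([1, 2, 1, 2, 1, 2, 1, 2], k=1))  # -> 0.0
```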
Here's hoping the sequence prediction approach works.
random
math
machine learning
distributed computing