14

I've assembled enough computing power from the trash. Now I can start to build my own personal 'cloud'. Fuck, I hate that word.
But I have a bunch of i7s and i5s on hand, in towers. Next is just to network them and set up some software to receive commands.

So far I've looked at Ray and Dispy for distributed computation. If there are others any of you are aware of, let me know. If you're familiar with either of these and know which one is the easier approach to get started with, I'd appreciate your input.
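
For a sense of what I'm going for, here's a minimal Ray sketch, assuming one tower runs `ray start --head` and the rest join with `ray start --address=<head-ip>:6379`. The identity names and payload here are made up for illustration:

```python
# Minimal Ray scatter-gather sketch: one task per identity, farmed out
# across whatever scavenged boxes have joined the cluster.
import ray

ray.init(address="auto")  # attach to the already-running cluster

@ray.remote
def train_identity(identity_name, semiprimes):
    # placeholder: real work would train/predict for this identity here
    return identity_name, len(semiprimes)

# hypothetical identity labels; [15, 21, 35] is a toy payload
futures = [train_identity.remote(name, [15, 21, 35])
           for name in ("a/u", "d4a/d4", "d4u/u")]
print(ray.get(futures))
```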

The goal is to get all these machines up and running, a cloud that's as dirt cheap as possible, and then train it on sequence prediction of the hidden variables derived from semiprimes. Right now the set is unretrievable, but there are a lot of heavily correlated known variables, so I'm hoping the network can derive better and more accurate insights than I can in a pinch.
Because any given semiprime has numerous identities (hundreds are known) that immediately yield both of its factors if, say, a certain constant or quotient is known (it isn't), knowing any *one* of them along with the correct input is equivalent to knowing the factors of p.

So I can set each machine to train and attempt to predict the unknown sequence for each particular identity.

Once the machines are set up and I've figured out which distributed library to use, the next step is to set up Keras and train the model using, say, all the semiprimes under one to ten million.
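
Roughly what I have in mind for that step, sketched below. The label (`n % 10`) is a stub I'm using just to keep the thing runnable, since the real targets are the hidden variables; assumes tensorflow and sympy are installed, with a tiny limit so it finishes fast:

```python
# Rough sketch: enumerate semiprimes below a limit, feed them in as
# zero-padded digit sequences, and train a small next-digit classifier.
# The target (n % 10) is a placeholder for the actual hidden variable.
import numpy as np
from sympy import primerange
from tensorflow import keras

LIMIT = 10_000  # toy limit; scale toward 1-10 million on the cluster
primes = list(primerange(2, LIMIT // 2))
semiprimes = sorted({p * q for i, p in enumerate(primes)
                     for q in primes[i:] if p * q < LIMIT})

WIDTH = len(str(LIMIT))
X = np.array([[int(d) for d in str(n).zfill(WIDTH)] for n in semiprimes])
y = np.array([n % 10 for n in semiprimes])  # stand-in label

model = keras.Sequential([
    keras.layers.Embedding(input_dim=10, output_dim=16),  # digits 0-9
    keras.layers.LSTM(64),
    keras.layers.Dense(10, activation="softmax"),  # next-digit distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64)
```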

I'm also working on a new way of measuring information: autoregressive entropy. The idea is that the prevalence of small numbers when searching for patterns in sequences is largely ephemeral (there's no long-term pattern), and AE lets us put a number on the density of these patterns in a partial sequence. But it's only an idea at the moment, and I'm not sure what use it has.
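
If I had to pin it down in code right now, one possible formalization (entirely speculative on my part) is the average surprisal of each next digit under a simple k-gram frequency predictor fit on the prefix so far; the alphabet size of 10 and k=2 are arbitrary choices:

```python
# One speculative way to put a number on "pattern density" in a partial
# sequence: predict each next digit from counts of what followed the same
# k-gram earlier in the prefix, and average the surprisal in bits.
# Lower values = more exploitable short-range pattern.
import math
from collections import defaultdict

def autoregressive_entropy(seq, k=2, alphabet=10):
    counts = defaultdict(lambda: defaultdict(int))  # k-gram -> next-digit counts
    total_bits, n = 0.0, 0
    for i in range(k, len(seq)):
        ctx, nxt = tuple(seq[i - k:i]), seq[i]
        seen = sum(counts[ctx].values())
        p = (counts[ctx][nxt] + 1) / (seen + alphabet)  # Laplace smoothing
        total_bits += -math.log2(p)
        counts[ctx][nxt] += 1  # update the predictor as we go
        n += 1
    return total_bits / n if n else 0.0

# digits of pi as a test sequence
print(autoregressive_entropy([int(d) for d in "14159265358979323846"]))
```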

Here's hoping the sequence prediction approach works.

Comments
  • 4
    You want to train an AI to detect or generate primes?!
  • 4
    @Oktokolo Neither.

    I want to train an AI that

    1. takes a large known set of variables hidden within semiprimes.

    2. learns to generate the corresponding constants for a given semiprime. "muh magic numbers" as it were.

    3. spits out these constants.

    Having the variables and identities tells us something about the semiprime's factors, but not the factors themselves. With the given constants we can derive the full identities, which is equivalent to knowing the factors themselves.

    And with a model trained to classify sequences based on the known variables, we can use a known sequence as input to generate the next digit of an unknown constant, feed that digit back into the input sequence, and repeat to recover the full constant that was previously unknown. (There's a sketch of that loop after the examples below.)

    If you know the quotient a/u, but not the factor a or the denominator u, then knowing u gives you a.

    Likewise, knowing d4a, you could generate d4a/d4,

    d4u/u, or (a/u)*(d4u/d4), etc.

    where d4, u, and a are all unknown.
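
    And for what the feed-back loop would actually look like, a sketch: `model` stands in for whatever got trained in the rant above, and greedy argmax decoding is just one arbitrary choice, nothing about the real identities is assumed.

    ```python
    # Sketch of the digit feed-back loop described above. `model` is a
    # stand-in for a trained next-digit classifier (e.g. the Keras model
    # from the rant), assumed to map a digit sequence to a distribution
    # over digits 0-9.
    import numpy as np

    def extend_constant(model, known_digits, n_new):
        seq = list(known_digits)
        for _ in range(n_new):
            x = np.array([seq])                     # shape (1, len(seq))
            probs = model.predict(x, verbose=0)[0]  # distribution over 0-9
            next_digit = int(np.argmax(probs))      # greedy choice
            seq.append(next_digit)                  # feed it back in
        return seq
    ```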
  • 4
    I recommend writing all of this down in something like a blog (not in devRant posts). That way you can use it in interviews or talks and such, and also remember in the future what you did and how.
  • 2
    OpenStack?
  • 1
    @NeatNerdPrime OpenStack looks neat, but it's absolute overkill. What was your experience with it, in any case?
  • 2
    @Nanos Basically everything I have was throwaways, but as far as Xeons go, a cursory glance at the wiki tells me I don't want anything less than 2010-2012.

    4-8 cores is the minimum.

    Would love to do smaller units but I have to work with what I can basically grab for free, which seems to be i5s and i7s and the occasional graphics card.

    On the plus side, plenty of spare parts ¯\_(ツ)_/¯

    Thanks for the advice though. I'll keep an eye out for Xeon processors (and apparently Opterons). Think I passed up a few because they looked kind of old, but if they meet the requirements for core count, then maybe it's worth it.
  • 2
    @Wisecrack well, for regular cloud computing it's all you need: it groups up a bunch of physical machines into one large pool of resources.

    If you want several VMs, high availability across zones, VLANs, and shared storage resources, you might want to look into it.

    Unless you want to pay for a VMware setup.
  • 1
    @Nanos "PCI / NVMe"

    Perceptually, how noticeable is the difference?

    On a scale of 1 to 10?
  • 2
    My golden advice: never overprovision!
  • 1
    @NeatNerdPrime also don't forget to configure the turbo encabulator on the NVMe either.
  • 2
    @Nanos in my experience, when people set things up to allow overprovisioning, they tend to justify it with lines like 'we simply want to use all the compute resources we have at our disposal', which in itself is a reasonable argument. However, the very same people also tend to - conveniently or not - forget that they have overprovisioning enabled, and then act surprised when their entire cluster grinds to a halt because that one application uses more than is available.

    To me, cloud computing is like the financial system: don't lend more money than you can pay off, and don't go in overleveraged, otherwise you might end up in a really bad spot.
  • 1
    @Nanos comes in, spamming his hardware porn all oblivious-like.

    Quality shitposting. I don't know what I did to earn that sort of friendship, but never change, Nanos. Never change.
  • 1
    @Wisecrack
    A friend has been preaching this mantra to us for years now:

    Always, always, always use Proxmox as the base virtualisation layer.
    Everything else, OSes included, goes on top of that.
  • 1
    @scor I haven't the faintest clue.

    I'm upvoting it because confusion is one of my fetishes.
  • 1
    @Nanos you're masturbating to it right now, aren't you?

    Over there hogging all the CPU-batin' material.

    What would you go for in the GPU department?
  • 0
    @Nanos bruh, that GPU straddles the line between ancient and modern like a fossil on the K-T boundary. Very cool.
  • 1
    @Nanos don't know enough about the AI processing, but if the data is important you probably want ECC memory. At a certain point desktop processors are going to beat server processors, if the latter are too old. AMD makes the better desktop processors for workstation usage with the Ryzens.

    NVMe can seriously impact the performance of your setup, but not if there's no persistence bottleneck and the data isn't flowing through the network.