Search - "map function"
A story about how a busy programmer became responsible for training interns.
So I was put in charge of a team of interns and had to teach them Linux, coding (Bash, Python, and JS), and networking in general.
None of the interns had any technical experience, skills, knowledge or talent.
Furthermore, the task came to me as a surprise, so I had neither a training plan nor the time to make one.
Case 0:
Intern is asked to connect to a VM, see which interfaces there are, and bring up the one that's down (eth1). He shuts eth0 down instead and is immediately disconnected from the machine, unable to reconnect remotely.
Case 1:
Intern researches Bash scripting via a weird Android app and after an hour or so creates and runs this function: test(){test|test&}
He fork-bombed the VM all the other interns used: the function endlessly pipes into itself and backgrounds every call, spawning processes until the machine chokes.
Case 2:
All interns used the same VM despite the fact that I created one for each.
They saved the same SSH address in PuTTY while giving it different names.
Case 3:
After explicitly explaining and demonstrating to the interns how to connect to their own VMs, they all connect to the same machine and attempt to create file systems, map them, etc. One intern keeps running "shutdown -r" to test the delay flag, which he never even included.
Case 4:
All of the interns still somehow connect to the same VM despite me manually configuring their PuTTY "favorites". Apparently they copy-paste a DNS name that one of them sent to the entire team via mail. He also learned about the wall command and keeps scaring his teammates with fake warnings. A female intern actually asked me "how does the screen know what I look like?!" This was after she got a wall message telling her to eat less because she gained weight.
Case 5:
The most motivated intern ran "rm -rf" from his /etc directory.
P.S. All the other interns got disconnected because they were still using his VM.
Case 6:
While I was giving them a presentation about cryptography and explaining how SSH (which they'd been using for the past two weeks) works, an intern asked, "So is this like Gmail?"
I gave him the benefit of the doubt and asked if he meant the authorization process. He replied with a stupid smile, "No! I mean that it can send things!"
FML. I have a huge project to finish and have to babysit these art majors who decided to earn "ezy cash many" in high tech.
Adventures will be continued.
I’m surrounded by idiots.
I’m continually reminded of that fact, but today I found something that really drives that point home.
Gather ‘round, everybody, it’s story time!
While working on a slow query ticket, I perused the code, finding several causes, and decided to run git blame on the files to see what dummy authored the mental diarrhea currently befouling my screen. As it turns out, the entire feature was written by mister legendary Apple golden boy “Finder’s Keeper” dev himself.
To give you the full scope of this mess, let me start at the frontend and work my way backward.
He wrote a javascript method that tracks whatever row was/is under the mouse in a table and dynamically removes/adds a “.row_selected” class on it. At least the js uses events (jQuery…) instead of a `setTimeout()` so it could be worse. But still, has he never heard of :hover? The function literally does nothing else, and the `selectedRow` var he stores the element reference in isn’t used elsewhere.
This function allows the user to better see the rows in the API Calls table, for which there is also a search feature, the very thing I'm tasked with fixing.
It’s worth noting that above the search feature are two inputs for a date range, with some helpful links like “last week” and “last month” … and “All”. It’s also worth noting that this table is for displaying search results of all the API requests and their responses for a given merchant… this table is enormous.
This search field for this table queries the backend on every character the user types. There’s no debouncing, no submit event, etc., so it triggers on every keystroke. The actual request runs through a layer of abstraction to parse out and log the user-entered date range, figure out where the request came from, and to map out some column names or add additional ones. It also does some hard to follow (and amazingly not injectable) orm condition building. It’s a mess of functional ugly.
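(For context, the standard fix for that keystroke spam is a few lines of debounce. A minimal sketch in TypeScript; `fetchResults` is a hypothetical stand-in for the real query call:)

// Delay calling `fn` until the user has stopped typing for `ms` milliseconds.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

declare function fetchResults(term: string): void; // hypothetical backend call
const onSearchInput = debounce((term: string) => fetchResults(term), 300); // fires 300 ms after the last keystroke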
The important columns in the table this query ultimately searches are not indexed, despite it only looking for “create_order” records — the largest of twenty-some types in the table. It also uses partial text matching (again: on. every. single. keystroke.) across two varchar(255)s that only ever hold <16 chars — and of which users only ever care about one at a time. After all of this, it filters the results based on some uncommented regexes, and worst of all: instead of fetching only one page’s worth of results like you’d expect, it fetches all of them at once and then discards what isn’t included by the paginator. So not only is this a guaranteed full table scan with partial text matching for every query (over millions to hundreds of millions of records), it’s that same full table scan for every single keystroke while the user types, and all but 25 records (user-selectable) get discarded — and then requeried when the user looks at the next page of results.
What the bloody fucking hell? I’d swear this idiot is an intern, but his code does (amazingly) actually work.
No wonder this search field nearly crashed one of the servers when someone actually tried using it.
Asdfajsdfk.
I'm hiring, and I'm fucking done with recruiters buttering up skills etc. and sending me BS candidates.
Interview earlier today...
CV: MySql skill level 10 (out of 10)
Reality: Can't write a simple JOIN!
Yesterday...
CV: PHP 6+ years exp, self proclaimed ninja/jedi/oracle.
Reality:
[Me]: Write me a function to map an array to x.
[Ninja]: What's an array?
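(For the record, the kind of answer I was fishing for fits in a couple of lines. A minimal sketch in TypeScript, with a hypothetical doubling standing in for the "x":)

// Map an array through a function: one output element per input element.
const mapToX = <T, U>(arr: T[], toX: (item: T) => U): U[] => arr.map(toX);
mapToX([1, 2, 3], n => n * 2); // [2, 4, 6]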
I've come to the conclusion that the type of dev I want on my team is highly unlikely to be looking for work, much less using some piece-of-shit shady agent to find it, so I need to hunt him or her down personally and can use the phenomenally large recruiter's fee as a hiring bonus/incentive.
Only problem now is finding quality full-stack devs in the area (Johannesburg, South Africa).
I'm thinking of posting a 'challenge' job ad to filter out good candidates: some kind of code challenge to be solved that gives them my contact info. Anyone have any creative ideas I could try?
I messaged a professor at MIT and surprisingly got a response back.
He told me that "generating primes deterministically is a solved problem" and he would be very surprised if what I wrote beat wheel factorization, but that he would be interested if it did.
It didn't when he messaged me.
It does now.
Tested on primes up to 26 digits.
Current time tends to be 1/100th to 2/100ths of a second.
Seems to be steady.
For the first n = 1 million digits it *always* returns false for composites, while for primes it returns true 56% of the time, and now that I've made it faster, I'm fairly certain I can get it to 100% accuracy.
In fact what I'm thinking I'll do is generate a random semiprime using the suspected prime, map it over to some other factor tree using the variation on modular exponentiation several of us on devRant stumbled on, and then see if it still factors. If it does, then we know the number in question is prime. And because we know the factor in question, the semiprime mapping function doesn't require any additional searching or iterations.
The false negative rate, I think, goes to zero the larger the prime, from what I can see. But it won't be an issue if I'm right about the accuracy being correctable.
I'd like to thank the professor for the challenge. He also shared a bunch of useful links.
That one's a rare bird.
Ok friends let's try to compile Flownet2 with Torch. It's made by NVIDIA themselves so there won't be any problem at all with dependencies right?????? /s
Let's use Deep Learning AMI with a K80 on AWS, totally updated and ready to go super great always works with everything else.
> CUDA error
> CuDNN version mismatch
> CUDA versions overwrite
> Library paths not updated ever
> Torch 0.4.1 doesn't work so have to go back to Torch 0.4
> Flownet doesn't compile, get bunch of CUDA errors piece of shit code
> online forums have lots of questions and 0 answers
> Decide to skip straight to vid2vid
> More cuda errors
> Can't compile the fucking 2d kernel
> Through some act of God reinstalling cuda and CuDNN, manage to finally compile Flownet2
> Try running
> "Kernel image" error
> excusemewhatthefuck.jpg
> Try without a label map because fuck it the instructions and flags they gave are basically guaranteed not to work, it's fucking Nvidia amirite
> Enormous fucking CUDA error and Torch error, makes no sense, online no one agrees and 0 answers again
> Try again but this time on a clean machine
> Still no go
> Last resort, use the docker image they themselves provided of flownet
> Same fucking error
> While in the process of debugging, realize my training image set is also bound to have bad results because "directly concatenating" images together as they claim in the paper actually has horrible results, and the network doesn't accept 6 channel input no matter what, so the only way to get around this is to make 2 images (3 * 2 = 6 quick maths)
> Fix my training data, fuck Nvidia dude who gave me wrong info
> Try again
> Same fucking errors
> Doesn't give any helpful information, just spits out a bunch of fucking memory addresses and long function names from the CUDA core
> Try reinstalling and then making a basic torch network, works perfectly fine
> FINALLY.png
> Setup vid2vid and flownet again
> SAME FUCKING ERROR
> Try to build the entire network in tensorflow
> CUDA error
> CuDNN version mismatch
> Doesn't work with TF
> HAVE TO FUCKING DOWNGRADE DRIVERS TOO
> TF doesn't support latest cuda because no one in the ML community can be bothered to support anything other than their own machine
> After setting up everything again, realize have no space left on 75gb machine
> Try torch again, hoping that the entire change will fix things
At this point I'll leave a space so you can try to guess what happened next before seeing the result.
Ready?
3
2
1
> SAME FUCKING ERROR
In conclusion, NVIDIA is a fucking piece of shit that can't make their own libraries compatible with themselves, and can't be fucked to write instructions that actually work.
If anyone has vid2vid working or has gotten around the kernel image error for AWS K80s please throw me a lifeline, in exchange you can have my soul or what little is left of it
Trying to hire more good devs... it's surprisingly hard. A guy with a supposed decade of JavaScript experience fails the code test: "I don't really use the map function so I don't know it."
R U kidding me
...and yet my "maybe we should consider remote devs" idea isn't getting any traction :/
Two big moments today:
1. Holy hell, how did I ever get on without a proper debugger? Was debugging some old code by eye (following along and keeping track mentally of what the variables should be and what each step did). That didn't work because the code isn't intuitive. Tried the print() method, old reliable as it were. Kinda worked, but didn't give me enough fine-grained control.
Bit the bullet and installed Wing IDE for Python. And bam, it hit me: how did I ever live without step-through and breakpoints before now?
2. Remember that non-sieve prime generator I wrote a while back? (Well, maybe some of you do.) The one that generated quasi-Lucas-Carmichael (QLC) numbers? Well that's what I managed to debug. I figured out why it wasn't working. Last time I released it, I included two core methods, genprimes() and nextPrime(). The first generates a list of primes accurately, up to some n, and only needs a small handful of QLC numbers filtered out after the fact, because the set of primes generated and the set of QLC numbers overlap. (Well, I think they call it an embedding, as in the QLC numbers are included in the series generated by genprimes, but not the converse, but I digress.)
nextPrime() was supposed to take any arbitrary n above zero and accurately return the nearest prime number above the argument. But for some reason, when it started, it would return 2, 3, 5, 6... while genprimes() worked fine.
So genprimes loops over an index, i, and tests it for primality. It begins by entering the loop, and doing "result = gffi(i)".
This calls into a function that runs four tests on the argument passed to it. I won't go into detail here about what those are, because I don't even remember how I came up with them (I'll make a separate post when the code is fully fixed).
If the number fails any of these tests then gffi would just return the value of i that was passed to it, unaltered. Otherwise, if it did pass all of them, it would return i+1.
And once back in genPrimes() we would check if the variable 'result' was greater than the loop index. And if it was, then it was either prime (comparatively plentiful) or a QLC number (comparatively rare)--these two types and no others.
nextPrime() was only taking n, and didn't have this index to compare to, so the prior steps in genprimes were acting as a filter that nextPrime() didn't have, while internally gffi() was returning not only primes and QLCs, but also plenty of composite numbers.
Now *why* that last step in genPrimes() was filtering out all the composites, idk.
But now that I understand what's going on I can fix it, and hypothetically it should be possible to enter a positive n of any size, and without additional primality checks (such as is done with sieves, where you have to check off multiples of n), get the nearest prime number. Of course I'm not familiar enough with prime number generation to know if that's an achievement or worth mentioning, so if anyone *is* familiar and knows how something like that holds up compared to other linear generators (O(n)?), I'd be interested to hear about it.
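(For anyone wanting to sanity-check outputs against that contract: this is not my gffi-based generator, just the naive trial-division baseline, sketched in TypeScript, that defines what a correct nextPrime() should return:)

// Smallest prime strictly greater than n, by trial division. Slow but obviously correct.
function nextPrimeNaive(n: number): number {
  const isPrime = (k: number): boolean => {
    if (k < 2) return false;
    for (let d = 2; d * d <= k; d++) if (k % d === 0) return false;
    return true;
  };
  let c = Math.floor(n) + 1;
  while (!isPrime(c)) c++;
  return c;
}
// nextPrimeNaive(1) -> 2, nextPrimeNaive(6) -> 7, nextPrimeNaive(7) -> 11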
I'm also working on filtering out the intersection of the sets (the QLC numbers), which I'm pretty sure I figured out how to incorporate into the prime generator itself.
I also think it may be possible to generate primes even faster using the Carmichael numbers or a related set, or even to derive a function that maps one set of upper and lower bounds around a semiprime to Carmichael numbers that act as the upper and lower bounds on the factors of that semiprime.
Meanwhile I'm also looking into testing the prime generator on a larger set of numbers (to make sure it doesn't fail at large values of n) and so I'm looking for more computing power if anyone has it on hand, or is willing to test it at sufficiently large bit lengths (512, 1024, etc).
Lastly, I realized the earlier work I posted (linked below) could be applied with ECM to greatly reduce the smallest factor of a large number.
If ECM, being one of the best methods available, only handles 50-60 digit numbers, and your factors are 70+ digits, then being able to transform your semiprime product into another product tree that's non-semiprime, with factors that ARE in range of ECM, and which *does* contain either of the original factors, means products that were *formerly* not factorable by ECM *could* be now.
That wouldn't have been possible though without enormous help from many others, such as hitko, who took the time to explain the solution was a form of modular exponentiation, Fast-Nop who contributed on other threads, Voxera who did as well, and support from Scor in particular, and many others.
Thank you all. And more to come.
Links mentioned (because DR wouldn't accept them as they were):
https://pastebin.com/MWechZj9
I didn't leave, I just got busy working 60 hour weeks in between studying.
I found a new method called matrix decomposition (not the known method of the same name).
The premise is that you break a semiprime down into its component numbers and magnitudes; let's say 697, for example. It becomes 600, 90, and 7.
Then you break each of those down into their prime factorizations (with exponents).
So you get something like
>>> decon(697)
offset: 3, exp: [[Decimal('2'), Decimal('3')], [Decimal('3'), Decimal('1')], [Decimal('5'), Decimal('2')]]
offset: 2, exp: [[Decimal('2'), Decimal('1')], [Decimal('3'), Decimal('2')], [Decimal('5'), Decimal('1')]]
offset: 1, exp: [[Decimal('7'), Decimal('1')]]
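(A rough re-sketch of the mechanics; the real decon is Python with Decimals, and this TypeScript version with plain numbers is just to show the idea:)

// Prime-factorize n into [prime, exponent] pairs by trial division.
function factorize(n: number): [number, number][] {
  const out: [number, number][] = [];
  for (let p = 2; p * p <= n; p++) {
    let e = 0;
    while (n % p === 0) { n /= p; e++; }
    if (e > 0) out.push([p, e]);
  }
  if (n > 1) out.push([n, 1]);
  return out;
}

// Split n into magnitude components (697 -> 600, 90, 7) and factorize each.
function decon(n: number): { offset: number; exp: [number, number][] }[] {
  const digits = String(n).split("").map(Number);
  return digits.flatMap((d, i) => {
    if (d === 0) return [];
    const offset = digits.length - i;
    return [{ offset, exp: factorize(d * 10 ** (offset - 1)) }];
  });
}
// decon(697) matches the output above:
// offset 3 -> [[2,3],[3,1],[5,2]], offset 2 -> [[2,1],[3,2],[5,1]], offset 1 -> [[7,1]]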
And it turns out that in larger numbers there are distinct patterns that act as maps at each offset (or magnitude) of the product, mapping to the respective magnitudes and digits of the factors.
For example, I can pretty reliably predict, from a product alone, where the '8's are in its factors.
Apparently there's a whole host of rules like this.
So what I've done is gone and started writing an interpreter with some pseudo-assembly I defined. This has been ongoing for maybe a month, and I've had very little time to work on it around my job (which I'm about to be late for if I don't start getting ready, lol).
Anyway, long and the short of it, the plan is to generate a large data set of primes and their products, and then write a rules engine to generate sets of my custom assembly language, and then fitness test and validate them, winnowing what doesn't work.
The end product should be a function that lets me map from the digits of a product to all the digits of its factors.
It technically already works: I've printed out a ton of products and eyeballed patterns to derive custom rules, it's just not the complete set yet. And instead of spending months or years doing that, I'm just gonna finish the system that automatically derives them for me. The rules I've found so far have tested out successfully every time, and whether or not the engine finds those will be the test case for whether the broader system is viable, but everything looks legit.
I wouldn't have pursued this except that when I realized the production of semiprimes *must* be non-Eulerian (long story), it occurred to me that there must be rich internal representations mapping products to factors that we were simply missing.
I'll go into more details in a later post, maybe not today, because I'm working till close tonight (won't be back till 3 am), but after 4 1/2 years the work is bearing fruit.
Also, it's good to see you all again. I fucking missed you guys.
I am currently refactoring some code which exists before my time in this company.
The code was so inefficient before. To put it into perspective: for every function call, it used to loop through some data 100+ times.
I replaced it with a map and voila, no more loops.
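(The shape of the fix, roughly, in TypeScript; the data shapes are hypothetical:)

// Before: scan the whole list on every call, O(n) per lookup.
const users = [{ id: 1, name: "a" }, { id: 2, name: "b" }];
const findUserSlow = (id: number) => users.find(u => u.id === id);

// After: build the map once, then every lookup is O(1).
const byId = new Map(users.map(u => [u.id, u] as const));
const findUserFast = (id: number) => byId.get(id);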
The person who wrote this code doesn't even realise how bad it was. He sits beside me writing more stupid hacky code for other parts of the app.
Unreal Engine adventures:
me: So ok, I need a map from int to String
Unreal: ya but it's called TMap, FCompactPoseBoneIndex and FName.
me: ..uhhh ok whatever
...
me: ok for debugging, please print this
Unreal: FName is not a string
me: k. Fname.toString().
Unreal: ya but it aint a TChar array now
...
IT'S A FKING STRING JUST PRINT IT. And the other guy is still an int with extra steps! Come the fuck on now....
I mean, honestly, a logging function that cannot print a fking FString? sigh...
Man, I miss Python and Blender...
!rant
the most popular ecommerce solution in php is a massive (cosmological scale) pile of corporate crap (magento) and the next most popular is an abomination (opencart)
after fucking around with both for a month (the client asked for the project to use only one of the two), I'm still barely reaching any results, and most of my time is wasted on the stupid bloated spaghetti that is opencart. FUCK THIS.
Like, seriously, who the fuck writes a single-line SQL query with three LEFT JOINs, four or five aliases, a couple of CONCATs, and a bunch of sorting fuckery just to fetch the categories list, then queries the details of the specific category from a different function?
Also, why the fuck map each language string manually? Or the hardcoded SEO URLs, or MyISAM for all tables, and no fucking foreign keys. Let that settle for a minute: no foreign keys. The delete method in the model is at least twenty lines, and then he came up with the genius idea of duplicating the models in the frontend and the backend, accessing the same data as the same user, but with different naming conventions.
I'm going to convince him to use something sane like codeigniter/laravel/fuelphp, or I'll turn down the project.
Please fucking tell me there's a better cleaner way to write this render() function?
The use of so many "in-line" code evaluations, arrow functions, (), {}, ...
Just spent like probably 30 minutes just trying to figure out what closes what...
And the author is inconsistent?
Sometimes he uses map( location => { return ... })
other times he uses setState((prevState) => ({ key: this.state.keyValue}))
And there's no note as to why... are they interchangeable, used in specific cases, does it matter????!!!!!!
Or is he just trying to demonstrate 1000 different ways you can say the same thing in JS?
!@#!#@!$#%#$!@#!@#!#$$%
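(For what it's worth, the two styles are interchangeable; only the body syntax differs. A quick sketch:)

// Block body: braces, explicit `return` required.
const a = (n: number) => { return n + 1; };
// Expression body: no braces, the value is returned implicitly.
const b = (n: number) => n + 1;
// An implicitly returned object literal must be wrapped in parentheses,
// otherwise the braces parse as a block body. That's the setState case.
const c = () => ({ key: "value" });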
Today I was asked an interview question on JavaScript regarding the difference between a normal `for` loop and a `map` function over an array with regard to closures. Can anyone help me understand? #javascript #interview
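(In case it helps: the classic difference is that `var` in a `for` loop shares one binding across all iterations, while `map`'s callback gets a fresh parameter binding per element. A sketch:)

// With `var`, every closure captures the same `i`, which is 3 by the time they run.
const fromLoop: (() => number)[] = [];
for (var i = 0; i < 3; i++) fromLoop.push(() => i);
fromLoop.map(f => f()); // [3, 3, 3]

// With `map`, each callback closes over its own `v`.
const fromMap = [0, 1, 2].map(v => () => v);
fromMap.map(f => f()); // [0, 1, 2]
// (Using `let` in the for loop also gives a fresh binding per iteration.)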
Just a short reply to whoever deleted their rant because they didn't read the JS docs.
map is a map. You don't just dump a function call in there. Each value is mapped to el, and the element is then set to the function's result.
This works:
new Array(38).fill("10").map(el => parseInt(el));
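And for anyone wondering why the lambda matters, here's what dumping the function straight in does:

new Array(3).fill("10").map(parseInt);
// -> [10, NaN, 2]
// map calls parseInt(value, index, array), so the element index lands in
// parseInt's radix parameter: radix 0 is ignored, radix 1 is invalid (NaN),
// and radix 2 parses "10" as binary.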
Data wrangling is messy
I'm doing the vegetation maps for the game today, maybe rivers if it all goes smoothly.
I could probably do it by hand, but there's something like 60-70 ecoregions to chart, each with its own species, both fauna and flora. And each has an elevation range it's found at in real life, so I want to use the heightmap to dictate that. Who has time for that? It's a lot of manual work.
And the night prior I'm thinking "oh this will be easy."
yeah, no.
(Also why does Devrant have to mangle my line breaks? -_-)
Laid out the requirements, how I could go about it, and the more I look the more involved it gets.
So what I think I'll do is automate it. I already automated some of the map extraction, so I don't see why I shouldn't just go the distance.
Also it means, later on, when I have access to better, higher-resolution geographic data, updating it will be a smoother process. And even though I'm only interested in flora at the moment, there's no reason I can't reuse the same system to extract fauna information.
Of course in-game design there are some things you'll want to fudge. When the players are exploring outside the Rockies in a mountainous area, maybe I still want to spawn the occasional mountain lion as a mid-tier enemy, even though our survivor might be outside the cat's natural habitat. This could even be the prelude to a task you have to do: go take care of a dangerous creature outside its normal hunting range. And who knows why it is there? Wildfire? Hunted by something *more* dangerous? Poaching? Maybe a nuke plant exploded and drove all the wildlife from an adjoining region?
who knows.
Having the extraction mostly automated goes a long way to updating those lists down the road.
But for now, flora.
For deciding plants and other features of the terrain what I can do is:
* rewrite pixeltile to take file names as input,
* along with a series of colors as a key (which are put into a SET to check each pixel against; see the rough sketch of this masking step after the list)
* input each region, one at a time, as the key, and the heightmap as the source image
* output only the region in the heightmap that corresponds to the ecoregion in the key.
* write a function to extract the palette from the outputted heightmap. (is this really needed?)
* arrange colors on the bottom or side of the image by hand, along with (in text) the elevation in feet for reference.
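Rough sketch of that masking step (TypeScript; a hypothetical re-imagining of pixeltile's core, assuming flat RGBA pixel data like canvas ImageData gives you):

// Keep only pixels whose color is in the key set; zero out everything else.
function maskRegion(rgba: Uint8ClampedArray, keyColors: Set<string>): Uint8ClampedArray {
  const out = new Uint8ClampedArray(rgba.length); // starts as transparent black
  for (let i = 0; i < rgba.length; i += 4) {
    const key = `${rgba[i]},${rgba[i + 1]},${rgba[i + 2]}`;
    if (keyColors.has(key)) out.set(rgba.subarray(i, i + 4), i);
  }
  return out;
}
// e.g. maskRegion(pixels, new Set(["34,139,34"])) keeps just that one region's color.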
For automating this entire process I can go one step further:
* Do this entire process with the key colors I already snagged by hand, outputting region IDs as the file names.
* setup selenium
* selenium opens a link related to each elevation map of a specific biome, and saves the text links (so I don't have to hand-open them)
* I'll save the species and text by hand (assuming elevation data isn't listed)
* once I have a list of species and other details, save the list to CSV, JSON, or another format
* then selenium opens this list, opens wikipedia for each, one at a time, and searches the text for elevation
* selenium saves out the species name (or an "unknown"), and the elevation, to a text file, along with the biome ID, and maybe the elevation code (from the heightmap) as a number or a color (probably a number; it simplifies changing the heightmap later on)
Having done all this, I can start to assign species types to specific world tiles. The outputs for each region act as reference.
The only problem with the existing biome map (you can see it below, it's ugly) is that it has a lot of "in-between" colors. There's a few things I can do here. I can treat those as a "mixing" between regions, dictating the chance of one biome's plants or the other's spawning. This seems a little complicated and dependent on a scraped-together standard rather than actual data. So I'm thinking instead what I'll do is implement biome transitions in code, which makes more sense and decouples it from relying on the underlying data. It also prevents species and terrain from generating in, say, towns on the borders of a region, where certain plants or terrain features would be unnatural. Part of what makes an ecoregion unique is that geography has led to relative isolation and the evolutionary development of each region (usually thanks to mountains, rivers, and large impassable expanses like deserts).
Maybe I'll stuff it all into a giant bson file or maybe sqlite. Don't know yet.
As an entry level programmer I may not know what I'm doing, and I may be supposed to be looking for a job, but that won't stop me from procrastinating.
Data wrangling is fun.
MongoDB database with really relational data. One main collection that had refs to four other collections, all of those references necessary to populate data for a page view. Complicated aggregate to populate all the necessary data and then filter based on criteria selected by the user. And then the client decides that he wants the information to be sortable by column. Some of those columns are fields on the main model, no problem. Others are fields on the refs, which is more of a problem. Especially given that these refs aren’t one single object. They’re arrays of objects.
The revelation was that I could just write an aggregate function to flat map the main collection, returning only the fields necessary for the search, and output it to a new collection and instead use that new collection for displaying and filtering/sorting search results.
But you can’t run the aggregate all the time, you surely say. If anything changes in the main collection, it won’t be reflected in the search results!
Mongoose post(‘findOneAndUpdate’) hooks, my friends. Mongoose post(‘findOneAndUpdate’) hooks.
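(Roughly what that looks like; a minimal sketch with hypothetical schema and field names, not the actual code:)

import mongoose from "mongoose";

const MainSchema = new mongoose.Schema({ name: String, tags: [String] });
const SearchResult = mongoose.model("SearchResult",
  new mongoose.Schema({}, { strict: false }));

// Hypothetical flattener: the same projection the aggregate writes out.
const flattenForSearch = (doc: any) => ({ name: doc.name, tagCount: (doc.tags ?? []).length });

// After any findOneAndUpdate on the main model, re-sync the denormalized search doc.
MainSchema.post("findOneAndUpdate", async function (doc: any) {
  if (!doc) return;
  await SearchResult.updateOne(
    { mainId: doc._id },
    { $set: { mainId: doc._id, ...flattenForSearch(doc) } },
    { upsert: true }
  );
});

const Main = mongoose.model("Main", MainSchema);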
Never been so happy to have a thing working properly in my life.
5 fucking days of Google search after Google search. Error after fucking error. Deadline getting closer by the fucking minute. Teammates interrupting me every 10 minutes over Discord asking for help on their fucking parts of the project.
And it turns out the solution was just one damn line.
One fucking line in a forEach to iterate over the model data, sending the necessary fields to the JavaScript function that creates map pins for the database locations.
5 fucking horrible days all amounts to 1 line
Really shows how much I still have to learn. And the yelling at my screen reveals my need to take an anger management class.
Stop using arrow functions everywhere!!!!!!!
What does that even mean?
fns.reduce( (prev, fn) => fn(prev), input)
Is this `fns.reduce` with two parameters,
or an arrow function that returns the `input` variable?
Take your time visually parsing this crap.
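(To answer the question: it's the former. `reduce` is called with two arguments, a reducer and an initial value, and it pipes `input` through every function in `fns`. Spaced out, it reads fine:)

const fns = [(n: number) => n + 1, (n: number) => n * 2];
const input = 3;

const result = fns.reduce(
  (prev, fn) => fn(prev), // reducer: feed the running value through the next function
  input                   // initial accumulator
);
// result === 8, i.e. (3 + 1) * 2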
(I know this rant won't gather much attention; maybe there are just a bunch of people who know Redux, and still fewer who have used it in Angular.)
I feel so bad, really, I just want to throw everything against the wall. I really hate NgRx, I hate Redux and how it's de facto implemented in Angular. I talked with other developers, and everyone around says that Redux is hated only by people who don't understand it, and well, maybe it's stupid, but I hate it.
It's so different from plain Angular programming. Why the hell do I need to create an index.ts file? It looks so wrong.
Why the hell import * as reducer, why don't you just import the reducer?
Why do you need a switch statement? Really? We're in 2018, languages like Python get by without it, and in the era of reactive programming why don't you just map a key to a function?
Why so many files? Why, for a 20-row module, do I have to write five files, each of them twice as long?
Why so much boilerplate? Will the time spent implementing everything ever be gained back?
Why does everything look so wrong?
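(On the switch point, the key-to-function mapping already works fine in TypeScript. A minimal sketch, action names hypothetical:)

type State = { count: number };
type Action = { type: string; payload?: unknown };

// Map action types to handler functions instead of writing a switch statement.
const handlers: Record<string, (s: State, a: Action) => State> = {
  increment: s => ({ count: s.count + 1 }),
  reset: () => ({ count: 0 }),
};

const reducer = (s: State, a: Action): State =>
  (handlers[a.type] ?? ((x: State) => x))(s, a);
// reducer({ count: 1 }, { type: "increment" }) -> { count: 2 }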
Made a proof of concept combination of React + Highland.js + Recompose: https://codepen.io/hedgepig/pen/...
It's scrappy now, but the idea is a streaming alternative to Redux/MobX/whatever. The nice thing is one can treat events as a function over time, meaning one can map, pipe, reduce (scan), zip, etc.
Going to try it on a side project (potentially Hive Sim: https://devrant.io/collabs/975778) and see how it goes.
If anyone has a moment, I'm curious if I'm fucking something up.
Model:
self.linear_relu_stack = nn.Sequential(
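# 11 input features in (one per __init__ arg below), 8 sigmoid outputs (one per output field)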
nn.Linear(11, 13),
nn.ReLU(),
# nn.Linear(20, 20),
# nn.ReLU(),
nn.Linear(13,13),
nn.ReLU(),
nn.Linear(13,8),
nn.Sigmoid()
)
Inputs:
def __init__(self, targetx, targety, velocityx, velocityy, reloadtime, theta, phi, exitvelocity, maxtrackx, maxtracky,splashradius) -> None:
# map to 1 and 2
self.Target: XY = XY(targetx, targety)
# map to 3 and 4
self.TargetVel: XY = XY(velocityx, velocityy)
# TODO: this may never be necessary as targeting and firing is the primary objective
# map to 5, probably not yet needed may never be.
self.ReloadTime:float = reloadtime
# map to 6 and 7
self.TurretOrientation: Orientation = Orientation(theta, phi)
# map tp 8
self.MuzzleVelocity:float = exitvelocity
# map to 9 and 10, see i don't remember the outcome of this
# but i feel it should work. after countless bits of training data added.
# i can see how this would fuck up if exact values were off or there was a precision error
# maybe firing should be controlled by something else ?
self.MaxTrackSpeed: Orientation = Orientation(maxtrackx, maxtracky)
# these are for sigmoid output, any positive value of x will produce between 0.5 and 1.0 as return value
# from the sigmoid function.
self.OutMin = 0.5
self.OutMax = 1.0
# this is the number of meters radius that damage still occurs when a projectile lands.
# to be used for calculating where a hit will occur.
self.SplashRadius:float = splashradius
Outputs:
def __init__(self, firenow, clockwise,cclockwise,up,down,oor, hspeed, vspeed) -> None:
self.FireNow = float(firenow)
self.RotateClockWise = float(clockwise)
self.RotateCClockWise = float(cclockwise)
self.MoveUp = float(up)
self.Down = float(down)
self.OutOfRange = float(oor)
self.vspeed = float(vspeed)
self.hspeed = float(hspeed)
Probably Python's map function; either that or the Pool. The map function because I'm lazy and I want my data now!!
Why the fuck would you use a Java Optional in your Scala library? As a Scala novice, I just spent about 30 minutes wondering why my map function wouldn't compile 😠
I'm in a big fat fucking stinking rut, as in progress on this project has absolutely stagnated.
Gonna rubber face your duck now **UNZIPS** excepts I don't have zippers, as joggers are the one true way; fake Adidas til I fucking drop.
Brain damage aside, I understand both how I've laid out the data and what I'm supposed to do with it. We have a virtual machine, an array of instructions and arguments for a given process within it, and we need to walk this array and map values to registers.
We also need to spill values inside registers to the stack IF they are required at a further point within that block. This also isn't terribly complex. We simply look forward in the array and see if the value is an argument to any instruction that *needs* it to be loaded (i.e., within a register).
So this implies multiple iterations; we need to better understand how one particular value is used throughout an F before we can make a final decision on how many registers and stack space are actually needed for the whole block.
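(That forward scan is just last-use bookkeeping. A toy version of it in TypeScript; the instruction shape is hypothetical:)

type Instr = { op: string; args: string[]; out?: string };

// For each value name, record the index of its final use in the block.
// Past that index the value is dead: its register is free to reuse,
// and a spilled copy never needs to be reloaded.
function lastUses(block: Instr[]): Map<string, number> {
  const last = new Map<string, number>();
  block.forEach((ins, i) => ins.args.forEach(a => last.set(a, i)));
  return last;
}
// lastUses([{ op: "add", args: ["x", "y"], out: "z" },
//           { op: "mul", args: ["z", "x"], out: "w" }])
// -> Map { "x" => 1, "y" => 0, "z" => 1 }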
Here's where it gets tricky. If there's a call, we need to be certain that the symbol being invoked has already been fully processed. Besides the obvious fact that recursion fucks me up, there's another matter: say a private method gets invoked by another private method. We can take advantage of this, by which I mean sacrilege incoming, so put on this toga.
Looking at the output of C compilers, it would seem this is not done in practice, I would assume because it's a pain in the ass. But when you have the guarantee that F will only be called internally, as that's what "private" means, there are two ways it can go:
0. It's well below the 13-20 cycle threshold, so you inline the fucker. No surprises there.
1. It's a more involved affair, and invoked in more than one place, so you don't inline it. Code size matters.
Recursion and [1] are the big things holding me back. Not because it's too hard; like I said, this is kindergarten-level abstraction. I'm just slow and fanatical, which is how I prefer to spell "constant obsessive paranoid delusions". I can see the potential optimization I can pull here, so I'm stuck trying to figure it out.
The idea would be handling the register allocation and stack spill for an internal-internal (or deep internal; what we like to call a "guts" method) in synchronization with the *calling* processes. This is, fundamentally, violating all conventions, but it's so far under the hood that no one will notice.
Let me give you an example. If we were to pass some value to a function, expecting to mutate it and get a different value back, in a lot of cases it'd be stupid to make an implicit copy by using two registers, one for input and another for the output. Dude, it's one cycle. Multiply it by a million, say sixty times per second, for every time you __needlessly__ make a copy of a value that we've already stated is mutable.
Clearly unacceptable. This is, in the strictest sense, everywhere in every single codebase. Premature micro optimization is the root of all goodness, God is great and praiseworthy. So how do we go about it?
Answer is I know and I don't know. By which I mean to say, this very thing I've done by hand. Assembly is fun. Now the issue is teaching a calculator how to do it. Not so fun.
There is a dependency chain between processes, as I believe I've kind of alluded to. I'm trying to make decisions on the side of the caller depending on the details of the callee, which is why recursion is rawdogging my soul. This is the same situation, it's inverting the direction of one or more links in the dependency chain, which makes no fucking sense.
And yet it does.
Brain, explain yourself.
How do *you* handle this without crashing?
Brain?
<<ME STEWPED; BEEP-BOOP>>
Alright then, that was a useless attempt at fuckery. Let's have a nap then, maybe it'll come to me in the morning. That's what I've been saying to myself for almost a month now.
Perhaps it is a hardcoded fuk.
Just something I've been thinking on for a while:
How could programming be done if we couldn't use ordinary if-statements (though functional operations such as map and filter, with an if inside the lambda function, etc., would be alright)?
Could it work? Also, would it be possible to reduce the number of while loops by using functions for most of the "loop situations" as well?
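(A trivial sketch in TypeScript of trading the loop-plus-if for filter and map:)

const nums = [-2, -1, 0, 1, 2];

// Imperative: a loop with an if inside.
const doubledPositives: number[] = [];
for (const n of nums) {
  if (n > 0) doubledPositives.push(n * 2);
}

// Functional: the branch moves into the filter predicate.
const doubledPositives2 = nums.filter(n => n > 0).map(n => n * 2);
// Both give [2, 4].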
Was asked to look at another team's repo to see how they use Cassandra. In that repo, I found a function that creates a map[int]bool populated with a handful of numbers, all with true as the value. The function then checks for the existence of an int in that map and returns true if it exists.