Search - "so many indexes"
-
I hired a woman for senior quality assurance two weeks ago. Impressive resume, great interview, but I was met with some pseudo-sexist puzzled looks from the dev team.
Meeting today. Boss: "Why is the database cluster not working properly?"
Team devs: "We've tried diagnosing the problem, but we can't really find it. It keeps being under high load."
New QA: "It might have something to do with the way you developers write queries".
She pulls up a bunch of code examples with dozens of joins and orderings on unindexed columns, explains that you shouldn't call queries from within looping constructs, that it's smart to limit the data with constraints and aggregations, hints at where to actually place indexes, how not to drag the whole DB to the frontend and process it in VueJS, etc...
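Think queries like this (tables and columns made up, Postgres flavour, but you get the idea):

```sql
-- Before: join + sort on unindexed columns, fired from a loop, one query per row
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
ORDER BY o.created_at DESC;

-- The cheap wins she listed: index the join and sort columns,
-- constrain and limit the data in the database instead of in VueJS
CREATE INDEX idx_orders_customer ON orders (customer_id);
CREATE INDEX idx_orders_created ON orders (created_at);

SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= now() - interval '30 days'
ORDER BY o.created_at DESC
LIMIT 50;
```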
New QA: "I've already put the tasks for refactoring the queries in Asana"
I'm grinning, because finally... finally I'm not alone in my crusade anymore.
Boss: "Yeah but that's just that code quality nonsense Bittersweet always keeps nagging about. Why is the database not working? Can't we just add more thingies to the cluster? That would be easier than rewriting the code, right?"
Dev team: "Yes... yes. We could try a few more of these aws rds db.m4.10xlarge thingies. That will solve it."
QA looks pissed off, stands up: "No. These queries... they touch the database in so many places, and so violently, that it has to go to therapy. That's why it's down. It just can't take the abuse anymore. You could add more little brothers and sisters to the equation, but damn that would be cruel right? Not to mention that therapy isn't exactly cheap!"
Dev team looks annoyed at me. My boss looks even more annoyed at me. "You hired this one?"
I keep grinning, and I nod.
"I might have offered her a permanent contract"45 -
I stare through the blueish black backgrounds and blurry colorful syntax into a somewhat familiar office within a mirrored world. That damned reflective glass layer covering these meaningless pixels is certainly not on my side.
The rushing sound of transactions flowing through cables is silenced today. Some blood clot in the invoicing system is zeroing out everything after the currency mark.
While sighing I spin a one-and-a-half pirouette on my desk chair — even when desperate, you shouldn't give up on style — I take three steps away from my screen and try to harmonize my thoughts.
So much noise, everywhere... Noise from within?
I have been stuck at the apogee of an inhale for a while now. Locked into some masochistic constriction, self-punishment for the blindness which stings my ego.
Just fucking take a deep breath you asshole...
I freeze in place, and fall backwards.
Patterns on the creamy drywall rapidly vibrate and synchronize on vivid rhythms of respiration and resonating basslines. Deep indigo rainbows ripple through tiny veins, in-between chalky grains, raining as fine magenta dust through the ceiling frames.
My bare feet slide over soft oscillating concrete, fine flows of unsievable sand surrounded by toes, toes surrounded by streaming variables veiled in obscure vile abstractions.
A jadegreen field of vectored compressions resiliently rumbles and bounces through the clearances and corners of the vibrant concrete office cave, whispering in tongues. I try to voice my woes in little blips and bleeps but I seem to be missing an asymmetric key to their shrouded sequenced speech.
Suddenly, a wild turbulence breaks up all signals.
Joanna floats by in her tipsy effervescent cloud of disordered black hair and alcohol perfume, one hand grasping grapes, her other waving at me.
With every finger she moves a thousand tensors propagating paradoxically flawed but perfect pieces of an intricate surreal picture, sketching whole constellations of possible paths throughout the leaves of the giant Ficus next to her desk.
She stops dead in her tracks, and asks somewhat hypocritically: "Are you high?"
I can not discern the meaning of her words, and respond stoically.
"Joanna! Check out those branches!".
"Pun intended?", she giggles.
I'm focused on her grapeless hand, her fingers stretching to reach the lush little tree.
On touch, the plant shivers, grappled in the tight net of the puppet master. She pulls her strings, applying measured weights, all nodes normalize, and Joanna speaks in an oddly soft tone:
"Isn't it beautiful, how so many models emulate nature"
Her cheek buried in foliage she babbles on about unbalanced search trees and machine learning models... but from the tips of her fingers tables and indexes flow into the plant. Users, payments, tariffs, invoices and taxes crawl over the bark, joining at thicker branches, joining at the stem....
Joining. JOINING. A JOIN.
"IF THERE'S NO FUCKING TAX MULTIPLIER IN THIS LEFT JOIN, EVERYTHING COALESCES TO ZERO" I shout at a perplexed Joanna who squeezes grape juice over her desk. I hop on the beat to my keyboard. She looks puzzled, hugs her Ficus tightly, and reaches for the whiskey bottle behind her monitor.
Attracted by my exclamation, Tom from finance swings open the door, while I push my branch.
I look at Joanna still half hiding between the leaves, and I laugh at her: "Branches! Oh, lame, I finally got it!"
Tom's heavy voice interrupts me: "Does this mean... does this mean that the invoicing bug is resolved?".
I smile at Tom with his tailored suit and waxed hair. "The money is flowing once more. All debts are being settled."
He releases his breath in relief, which he seems to have held since that morning as well.
Joanna adds: "Although I think he is forever indebted to my Ficus".
I nod.
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average per literal to look up, for a service that needs to be high load and low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and implement a few-hundred-line solution that would look it up in 1ms on average, with the worst possible case being very rare and not too distant from that.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache, because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middleman between the application and database with 500 lines of code that did the lookup faster and implemented a reasonable caching strategy. This dropped resource consumption by at least a factor of ten: misses were cheaper, and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would constantly download a huge data set for every point of sale, then parse and apply it. It had to reflect changes quickly, but would download the whole dataset each time, hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C (much faster than the traditional interpreted language), then use a custom data-differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. The protocol also used versioning for catch-up and differential combination to further reduce size. It went from being 30 seconds to a few minutes behind to keeping within a second of changes. It had also been using so much bandwidth that it would hit the limit on ADSL connections and get throttled. Looking at the traffic stats afterwards, it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off. It could now happily run over GPRS or 56K.
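The versioned-diff idea, as a toy sketch (the real thing used binary serialisation over a TCP stream; this is just the shape of it, all names hypothetical):

```python
import lzma

def make_delta(version, changes):
    """changes: list of (op, row) tuples, e.g. ('upsert', 'sku;price;qty')."""
    payload = "\n".join(f"{op}\t{row}" for op, row in changes).encode()
    return version + 1, lzma.compress(payload)  # ship only compressed changes

def apply_delta(state, blob):
    """Replay a compressed delta onto the point-of-sale's current state."""
    for line in lzma.decompress(blob).decode().splitlines():
        op, row = line.split("\t", 1)
        key = row.split(";", 1)[0]
        if op == "del":
            state.pop(key, None)
        else:
            state[key] = row
    return state
```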
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone wrote terrible SQL, then to optimise it ran it in the background with all possible variable values and stored the results of the joins and aggregates in new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB (and rapidly climbing) to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations, and the existing solution was factorial. I worked out how to optimise it to run in n*n, which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for, but with higher (maximum) performance, including an in-memory database to back the UI. It had layered, event-driven indexes, could handle referential integrity (an overlay on the database revealing only items with valid integrity), and could handle reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
Biggest challenge I overcame as dev? One of many.
Avoiding a life sentence when the 'powers that be' targeted one of my libraries as the root cause of system performance issues, and I didn't correct that accusation with a flamethrower.
What was the accusation? What I named the library. Yep. The *name* was causing every single problem in the system.
Panorama (a very, very expensive APM system at the time) identified my library in its analysis; the calls to/from SQL Server were the bottleneck.
We had one of Panorama's engineers on-site and he asked what MyLibrary (not the actual name) was, and (I'll preface this by saying I didn't know about or take part in any of the so-called 'research') a crack team of developers+managers researched the system thoroughly and found MyLibrary was used in just about every project. I wrote the .Net 1.1 MyLibrary as a mini-ORM to simplify the execution of database code (stored procs, etc) and gracefully handle+log database exceptions (auto-logged details such as the target db, stored procedure name, parameter values, etc, everything you'd need to troubleshoot database errors). This was before Dapper and the other fancy tools used by kids these days.
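For the youngsters, a MyLibrary-style call looked roughly like this (a sketch from memory, names and the logging line hypothetical, modern syntax for brevity):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public static class MyLibrary
{
    // Execute a stored proc and hand back a DataTable; on failure, log
    // everything needed to troubleshoot (proc name, parameter values, error).
    public static DataTable ExecuteProc(string connStr, string procName, params SqlParameter[] args)
    {
        try
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(procName, conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddRange(args);
                var table = new DataTable();
                new SqlDataAdapter(cmd).Fill(table); // Fill opens/closes the connection
                return table;
            }
        }
        catch (SqlException ex)
        {
            var ps = string.Join(", ",
                Array.ConvertAll(args, a => $"{a.ParameterName}={a.Value}"));
            Console.Error.WriteLine($"proc={procName} params=[{ps}] error={ex.Message}");
            throw; // the caller still sees it, but the details are logged
        }
    }
}
```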
By the time the news got to me, there was a team cobbled together whose only focus was to remove any/every trace of MyLibrary from the code base. Using Waterfall, they calculated it would take at least a year to remove+replace MyLibrary with the equivalent ADO.Net plumbing.
In a department wide meeting:
DeptMgr: "This day forward, no one is to use MyLibrary to access the database! It's slow, unprofessionally named, and the root cause of all the database issues."
Me: "What about MyLibrary is slow? It's excecuting standard the ADO.Net code. Only extra bit of code is the exception handling to capture the details when the exception is logged."
DeptMgr: "We've spent the last 6 weeks with the Panorama engineer and he's identified MyLibrary as the cause. Company has spent over $100,000 on this software and we have to make fact based decisions. Look at this slide ... "
<DeptMgr shows a histogram of the stacktrace, showing MyLibrary as the slowest>
Me: "You do realize that the execution time is the database call itself, not the code. In that example, the invoice call, it's the stored procedure that taking 5 seconds, not MyLibrary."
<at this point, DeptMgr is getting red-face mad>
AreaMgr: "Yes...yes...but if we stopped using MyLibrary, removing the unnecessary layers, will make the code run faster."
<typical head-nodders nod their heads in agreement>
Dev01: "The loading of MyLibrary takes CPU cycles away from code that supports our customers. Every CPU cycle counts."
<head-nodding continues>
Me: "I'm really confused. Maybe I'm looking at the data wrong. On the slide where you highlighted all the bottlenecks, the histogram shows the latency is the database, I mean...it's right there, in red. Am I looking at it wrong?"
<this was meeting with 20+ other devs, mgrs, a VP, the Panorama engineer>
DeptMgr: "Yes you are! I know MyLibrary is your baby. You need to check your ego at the door and face the facts. Your MyLibrary is a failed experiment and needs to be exterminated from this system!"
Fast forward 9 months, maybe 50% of the projects updated, and I come across the documentation left by the Panorama engineer. Even after the removal of MyLibrary, there was zero increase in performance. The engineer recommended the DBAs start optimizing their indexes and the other N+1 problems discovered. I decided to ask the developer who led the re-write.
Me: "I see that removing MyLibrary did nothing to improve performance."
Dev: "Yes, DeptMgr was pissed. He was ready to throw the Panorama engineer out a window when he said the problems were in the database all along. Didn't you say that?"
Me: "Um, so is this re-write project dead?"
Dev: "No. Removing MyLibrary introduced all kinds of bugs. All the boilerplate ADO.Net code caused a lot of unhandled exceptions, then we had to go back and write exception handling code."
Me: "What a failure. What dipshit would think writing more code leads to less bugs?"
Dev: "I know, I know. We're so far behind schedule. We had to come up with something. I ended up writing a library to make replacing MyLibrary easier. I called it KnightRider. Like the TV show. Everyone is excited to speed up their code with KnightRider. Same method names, same exception handling. All we have to do is replace MyLibrary with KnightRider and we're done."
Me: "Won't the bottlenecks then point to KnightRider?"
Dev: "Meh, not my problem. Panorama meets primarily with the DBAs and the networking team now. I doubt we ever use Panorama to look at our C# code."
Needless to say, I was (still) pissed that they had used MyLibrary as a dirty word and a scapegoat for months when they *knew* where the problems were. Pissed enough for a flamethrower? Maybe.
-
PHP arrays.
The built-in array is also a hashmap. Actually, it's always a hashmap, but you can append to it without specifying indexes and PHP will use consecutive integers. Its performance characteristics? Who knows. Oh, and only strings, ints and null are valid keys.
What's the iteration order for arrays if you use them as hashmaps (string keys)? Well, they have their internal order. So it's actually an ordered hashmap that's being called an array. And you can produce an array which has only integer keys starting with 0, but with non-sequential internal (iteration) order.
This array weirdness has some non-trivial implications. `json_encode` (serializes argument to JSON) assumes an array corresponds to a JSON array if its keys are consecutive integers in increasing order starting with 0, otherwise the array becomes a JSON object. `array_filter` (filters arrays/hashmaps using callback predicate) preserves keys, so it will punch holes in the int key sequence if non-last items are removed, thus turning arrays into hashmaps and changing your JSON structure if you forget to discard keys before serialization.
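A quick demo of that last footgun (minimal sketch):

```php
<?php
$a = [10, 15, 20, 25];

// array_filter preserves keys, so only keys 0 and 2 survive:
$even = array_filter($a, fn($v) => $v % 2 === 0);

echo json_encode($even);               // {"0":10,"2":20} ... an object now
echo json_encode(array_values($even)); // [10,20] ... reindex to get an array back
```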
You may wonder how JSON deserialization works, then? There's a special class for deserialized JSON objects, `stdClass`. It's basically a hashmap too, but it's an object, not an array, and all functions that would normally accept arrays won't work with it. So basically its only use is JSON (de)serialization. You can even cast arrays to objects, producing `stdClass`.
Bonus PHP trivia:
Many functions return nonsensical values. `preg_match`, the regex matching function, returns 1 for success, 0 for no matches and false for a malformed regular expression. PHP supports exceptions, so it could just throw one on errors. It would even make more sense to return true, false and null for these three cases. But no: 1, 0 and false. And the actual matches are returned via an output arg.
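All three return values in one place (sketch):

```php
<?php
var_dump(preg_match('/\d+/', 'abc123', $m)); // int(1), the match lands in $m[0]
var_dump(preg_match('/\d+/', 'abcdef'));     // int(0), no match
var_dump(@preg_match('/(/', 'abcdef'));      // bool(false), malformed regex (warning suppressed)
```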
`array_walk_recursive`, a function supposed to recursively apply a callback to each element of an array. That's what the docs say. It actually applies it to leaves only. It will also silently accept an object instead of an array and "walk" it, but without recursing into deeper objects.
Runtime type enforcing is supported for function arguments and returned values. You can use scalar types, classes, array, null and a few special keywords. There's also a `mixed` keyword, which is used in the docs and means "anything". It's syntactically valid, the parser will accept it, but it matches no values at runtime. Calling such a function will always cause a runtime error.
Strings can be indexed with negative integers. Arrays can't.
ReflectionClass::newInstanceWithoutConstructor: "Creates a new class instance without invoking the constructor". This one needs no commentary.
`array_map` is pretty self-explanatory if you call it with a callback and an array. Or if you provide more arrays of equal length via varargs, the callback will be called with more arguments, one from each array. Makes sense so far. Now, you can also call `array_map` with null instead of a callback. In that case it treats the provided arrays as rows of a matrix and returns that matrix, transposed.
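And the transpose party trick in action (sketch):

```php
<?php
$t = array_map(null, [1, 2, 3], ['a', 'b', 'c']);
// $t === [[1, 'a'], [2, 'b'], [3, 'c']] ... rows became columns
```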
-
Upscaling a prod database which was running on an 8-year-old Dell desktop used as a server. It had about 2GB of RAM and an Intel Core 2 processor...
This was the day I learned a lot about querying the database as efficiently as humanly possible.
-
FUCK ME IN MY INDICES.
FUCK THE GPUS IN THEIR INDICES.
I mean... I understand (roughly) why the meshes are sent to gpu in this form, but at the same time...
...there's a reason why first thing I did when I was coding my procedural geometry generation library, was abstracting away all of that stuff...
...sadly, as with many useful things, when I was looking for that lib at the start of this contract, I couldn't find it. And I was like "doesn't matter, this is a simple thing, using the library would just be lazy overkill anyway".
well, fuck.
two hours of playing around with two fucking triangles, trying to figure out which indexes are pointing to the correct vertices in a list containing FOUR outline paths.
(lower inner, upper inner, lower outer, upper outer, exactly in this order).
i mean, yeah, it's actually pretty straightforward stuff... for someone not as dumb as me =D
you just have two offsets, one that jumps you to start of the upper path, another that jumps you to the start of the outer path, then it's just
0 + upOffset to get the vertex extruded upwards from the zeroth of the inner path, or
0 + outOffset to get the zeroth from the outer outline, or
0 + outOffset + upOffset, to get the one extruded from zeroth outer vertex...
and so on.
simple stuff, then you just replace the zero with a loop control var, put them in the right order, and voilà! walls!
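for future me, the index math from above as a sketch (Python for brevity; assumes each path has n points, packed in the order above; winding order may need flipping):

```python
def wall_triangles(n, base, up_offset):
    """Triangle indices for the wall between the outline starting at `base`
    and its extruded copy starting at `base + up_offset`."""
    tris = []
    for i in range(n):
        j = (i + 1) % n  # wrap the gap between the last and first point
        a, b = base + i, base + j
        tris += [a, b, b + up_offset]              # first triangle of the quad
        tris += [a, b + up_offset, a + up_offset]  # second triangle
    return tris

# vertex list order: [lower inner, upper inner, lower outer, upper outer]
n = 4                                     # points per outline path
inner_wall = wall_triangles(n, 0, n)      # lower inner -> upper inner
outer_wall = wall_triangles(n, 2 * n, n)  # lower outer -> upper outer
```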
except... whatever, why am I describing in such detail, not necessary, you're not my rubber duck =D
in short, figuring out which fuckin vertex is which, when the list contains ...well, any number of points, and you need to plug the gap between last and first points of the paths, where you need to wrap around the list...
...has proven to be surprisingly hard for me.
funny how much I love doing these things with meshes, despite how bad I am at doing them, which makes me hate doing them despite loving it =D
-
We are having a history lesson updating a system that was built around 1985.
It's a custom-built sales and customer tracker, programmed in Clipper, which is a superset of xBase, a language that appears to be data-oriented. DosBox and Dosemu have both failed to run it: the program loads and indexes just fine, but when it gets to the program dashboard it shows the options and doesn't seem to accept any input, though it appears to be running as the time updates (any ideas?)
Tried compiling the source using harbour, compilation fails, something about "time" having too many arguments and other obscure errors. Urgh.
DBF files are easily converted and opened, but really we want to view the working program to see the relations so we can translate the data models.
It's both fascinating and infuriating at the same time.
-
It's a shame that people don't want to use F# but praise C# for how cool it has become and continues becoming. At the same time, little do they know that many of the features were simply drawn from F#.
It's just ridiculous how far this OO and C-style syntax crap has progressed. They keep copying things from functional languages, making the initial language a monstrosity like C++ is now, instead of just using languages like F#. I mean, it was all right there before C#: async/task, immutability, records, indexes, lambdas, non-null by default, who the hell knows what else.
Besides, many people (in my company at least) are just blindly overengineering with patterns and shit, where a simple function would be just enough.
Watch some NDC talks about F#, in particular those of Scott Wlaschin. It's just better in so many ways: less noise (I'm looking at you, brackets, commas and semicolons), a whole LOT of type inference and less duplication (just look at the C# signatures of LINQ methods; they're difficult to read), immutability by default, non-nullable by default, ADTs and pattern matching, some neat features like type providers (how many times have you used "paste special" or an online tool to create C# classes from a JSON/XML file, and how many times have you regenerated them because of schema changes?) and units of measure.
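A taste of what I mean, as a tiny sketch (not from any real codebase): an ADT with pattern matching, everything inferred, nothing nullable:

```fsharp
type Shape =
    | Circle of radius: float
    | Rect of width: float * height: float

// no type annotations needed; the match is checked for exhaustiveness
let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Rect (w, h) -> w * h

printfn "%f" (area (Circle 2.0))
```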
Of course, in some cases it's not optimal; in some cases the mutable data structures of C# are better for performance. But dude, how many performance-critical systems have you written in C#? I mean, if it comes down to performance you should use Rust or C++ or C after all.
*sighs*
-
Tldr:
Can't fucking figure out why I'm the only one who can't solve a DP problem in code, when my friends and I use the same idea and no one knows why only mine doesn't work...
We are given a task to solve a problem using DP. My friends write their code with the same idea as a solution. Copying the code is not allowed. I follow the same idea but my code won't work. Others look into it, in case they find errors. They can't find any.
The problem (for reference):
Given fixed lists of ints a = (a_1, a_2, ..., a_n) and b = (b_1, b_2, ..., b_n), with a_i, b_i >= 0 and a.length == b.length.
We want to maximize the sum of the a_i's chosen. Every a_i is connected with the b_i at the same index. b_i tells us how many indexes of a we have to skip when choosing the corresponding a_i, so (list index of b_i) + (b_i's value) + 1 is the position of the next available a_i.
The idea:
Create a new list c with same length as a (or b).
Begin at the end of c and save a_n at the same position in c. Iterate backwards through c, and at each position store a_i plus the max of all previously saved values of c (with regard to the b_i restriction), or a_i + 0 if the b_i restriction goes beyond the list.
Return the max value of c.
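The idea as a minimal sketch (assuming 0-based indexing; not anyone's actual submission):

```python
def max_sum(a, b):
    n = len(a)
    c = [0] * n
    for i in range(n - 1, -1, -1):
        nxt = i + b[i] + 1                  # first index still allowed after choosing a[i]
        best = max(c[nxt:], default=0)      # 0 if the restriction goes beyond the list
        c[i] = a[i] + best
    return max(c, default=0)

# picking a[1] and a[2] beats a[0] alone: 4 + 3 > 5
assert max_sum([5, 4, 3], [2, 0, 0]) == 7
```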
How does that not work for me but for the others?? Funny enough, a few given samples work with my code. I'm questioning my coding ability...
-
Manjaro has some quirks that annoy me (no MST timezone, spotty support for my WD NVMe), so I decided that since I'm not interested in any pre-configured graphical desktop of any kind, I should just dive into Arch, since it increasingly felt like that's what I was doing anyway, but with Manjaro to dull the blow. So I did, and I am over the moon for doing so. Lots of gnashed teeth, but DDG indexes an answer to every question I've had, and it always makes sense when I find it. I've enjoyed having to dive into systemd in a much more low-level way than ever before, to actually LEARN what it's doing, how, and why.
But one by one, I have been faced with issues I needed to resolve, and one by one, I've knocked them off. The result now is the best work and gaming desktop I have ever used.
Arch is not for geniuses or wizards. Just patient people who are willing to read. The payoff is staggering, and many times over worth the effort.
-
I hate the elasticsearch backup api.
From beginning to end it's a painful experience.
I try to explain it, but I don't think I will be able to cover it all.
The core concept is:
- repository (storage for snapshots)
- snapshots (actual backup)
The first design flaw is that every backup in a repository is incremental. ES creates an incremental filesystem tree.
Some reasons why this is a bad idea:
- deletion of (older) backups is slow, as newer backups need to be checked for integrity
- you simply have to trust ES that it does the right thing (given the bugs it has... It seems like a very bad idea TM)
- you have no possibility of verification of snapshots
Workaround... create many repositories, as each new repository forces a full backup.........
The second thing: ES scales. Many nodes / ES instances form a cluster.
Usually backup APIs incorporate these in their design. ES does not.
If an index spans 12 nodes and you use network storage, yes: a maximum of 12 nodes will open e.g. an NFS connection and start backing up.
It might not sound so bad with 12 nodes and one index...
But it gets pretty bad with 100s of indexes and several dozen nodes...
And there is no real limiting in ES. You can plug a few holes, but all in all, if you don't plan your backups carefully, you'll get pretty f*cked-up network congestion.
So traffic shaping must be manually added. Yay...
The last thing is the API itself.
It's a... very fragile thing.
Especially in older ES releases, the documentation is like handing you a flex instead of toilet paper for a wipe.
Documentation != API != Reality.
Especially the fault handling left me more than once speechless...
Eg:
/_snapshot/storage/backup
gives you a state PARTIAL
/_snapshot/storage/backup/_status
gives you a state SUCCESS
Why? The first one is blocking and refers to the backup status itself. The second one shouldn't be blocking and refers to the backup operation.
And yes. The backup operation state is SUCCESS, while the backup state might be PARTIAL (hence no full backup was made, there were errors).
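In curl terms (a sketch, assuming the repository "storage", the snapshot "backup" and the default port):

```sh
curl -s 'localhost:9200/_snapshot/storage/backup'          # blocks; reports the snapshot itself: PARTIAL
curl -s 'localhost:9200/_snapshot/storage/backup/_status'  # reports the backup operation: SUCCESS
```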
So now we have an additional API that we query, which wraps the Elasticsearch API. With all these shiny scary workarounds like polling, since some APIs are blocking, which might lead to a gateway timeout...
Gateway timeout? Yes. Some operations can run a LONG time (multiple hours), and you don't want a ton of open connections hogging resources... so you let the loadbalancer kill them. Most operations simply keep running in ES in the background after the connection was killed.
So much joy and fun, isn't it?
Now add the latest SMR scandal and a few faulty (as in SMR instead of CMR) HDDs in a hundred-terabyte ZFS pool, and you'll get my frustration level.
PS: The cluster has several dozen terabytes and a lot of nodes. If you have good advice, you're welcome - but please think carefully about this fact.
I might have accidentally vaporized people sending me links with solutions that don't work on large scale TM.
-
Here's the initial upgraded number fingerprinter I talked about in the past, plus some results and an explanation below.
Note that these are wide black images on ibb, so they appear as a tall thin strip near the top of the page, as if they're part of the website. They practically blend in. Right-click the black strip, hit 'view image', and zoom in.
https://ibb.co/26JmZXB
https://ibb.co/LpJpggq
https://ibb.co/Jt2Hsgt
https://ibb.co/hcxrFfV
https://ibb.co/BKZNzng
https://ibb.co/L6BtXZ4
https://ibb.co/yVHZNq4
https://ibb.co/tQXS8Hr
https://paste.ofcode.org/an4LcpkaKr...
Hastebin wouldn't save for some reason so paste.ofcode.org it is.
Not much to look at, but I was thinking I'd maybe mark the columns where gaps occur and do some statistical tests, like finding the stds of the gaps, density, etc. The type test I wrote categorizes products into 11 different types, based on the values of a subset of variables taken from a vector of a couple hundred variables, but I didn't want to include all that mess of code. And I was thinking of maybe running this fingerprinter on a per-type basis, set to repeat, and looking for matching indexes (pixels) to see what products have in common per type.
Or maybe using them to train a classifier of some sort.
Each fingerprint of a product shares something like 16-20% of its indexes with its factors, so I'm thinking that's an avenue to explore.
What the fingerprinter does is better explained by the subfunction findAb.
The code contains a comment explaining this, but basically the function destructures a number into a series of divisions and subtractions, and makes a note of how many divisions in a 'run'.
Typically this is for numbers divisible by 2.
So a number like 35 might look like this, when done
p = 35
((((p-1)/2)-1)/2/2/2/2)-1
And we'd represent that as
ab(w, x, y, z)
Where w is the starting value 35 in this case,
x is the number to divide by at each step, y is the adjustment (how much to subtract by when we encounter a number not divisible by x), and z is a string or vector of our results
which looks something like
ab(35, 2, 1, [1, 4])
Why [1,4]
because we were only able to divide by 2 once, before having to subtract 1, and repeat the process. And then we had a run of 4 divisions.
And for the fingerprinter, we do this for each prime under our number p, the list returned becoming another row in our fingerprint. And then that gets converted into an image.
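For the curious, findAb as described boils down to something like this (a sketch mirroring the description above, not copied from the paste):

```python
def find_ab(w, x, y):
    """Destructure w into runs of divisions by x, subtracting y when stuck."""
    runs, v, run = [], w, 0
    while v > 0:
        if v % x == 0:
            v //= x
            run += 1
        else:
            if run > 0:          # a run of divisions just ended
                runs.append(run)
                run = 0
            v -= y               # the adjustment step
    if run > 0:
        runs.append(run)
    return runs

# ((((35-1)/2)-1)/2/2/2/2)-1 -> one division, then a run of four
assert find_ab(35, 2, 1) == [1, 4]
```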
And again, what I find interesting is that
unknown factors of products appear to share many of these same indexes.
What I might do is for, each individual run of Ab, I might have some sort of indicator for when *another* factor is present in the current factor list for each index. So I might ask, at the given step, is the current result (derived from p), divisible by 2 *and* say, 3? If so, mark it.
And then when I run this through the fingerprinter itself, all those pixels might get marked by a different color, say, make them blue, or vary their intensity based on the number of factors present, I don't know. Whatever helps the untrained eye to pick up on leads, clues, and patterns.
If it doesn't make sense, take another look at the example:
((((p-1)/2)-1)/2/2/2/2)-1
This is semi-unique to each product. After the fact, you can remove the variable itself and keep just the structure in question, replacing the first variable with some other number, and you get to see what pops out the other side.
If it helps, you can think of the structure surrounding our variable p as the 'electron shell', the '-1's as bandgaps, and the runs of '2's as orbitals, with the variable at the center acting as the 'nucleus', with the factors of that nucleus acting as the protons and neutrons, or nougaty center lol.
Anyway, I just wanted to share today's flavor of insanity on the off chance someone might enjoy reading it.