Search - "data mutation"
-
A few minutes ago, I was going crazy over a bug caused by data mutation in Rails.
Basically, user1 created a post record and it was stored in the database normally. However, when a second user, user2, reloaded the page, the post (that user1 created) was updated to belong to user2. This happened because, on page reload, I was using the `<<` method to append user1's and user2's posts together! Apparently, `<<` not only mutates the array, it also performs a database update.
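For anyone curious, here's a minimal sketch of the footgun, assuming standard `has_many`/`belongs_to` models (the model names and the `user1`/`user2` records are hypothetical):

```ruby
class User < ApplicationRecord
  has_many :posts
end

class Post < ApplicationRecord
  belongs_to :user
end

post = Post.create!(user: user1, title: "user1's post")

# Looks like a harmless in-memory append...
user2.posts << post

# ...but on a has_many association, << persists the new foreign key
# right away, issuing roughly: UPDATE posts SET user_id = <user2.id>
post.reload.user == user2  # => true: user1's post now belongs to user2

# Plain Array concatenation, by contrast, leaves the database untouched:
combined = user1.posts.to_a + user2.posts.to_a
```

Calling `to_a` first drops down to a plain Ruby Array, so `+` (or even `<<` on that copy) can't touch the association.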
Kill me please!!
Also, data immutability seems like a much more reasonable language feature now. -
Hello,
I have gone through all the options that your public API has for syncing data, and I can now officially say that stripping an iframe of a Google Drive page would be better than the piece-of-shit mutation methods you have come up with.
Most sincerely,
A fucking annoyed dev that just wasted about 4 hrs on your shit. -
Calling in all Vue devs here! (Possibly any SPA dev actually)
We're building these fancy live-edit fields for our app. They sync with the database on every keypress (with a debounce, ofc). We also have a global Vuex module that keeps track of the application's sync state; using this module, we can prevent the user from leaving the page while there is data that hasn't been synced. Though, I think I'm doing something wrong here and not strictly adhering to the "single source of truth" principle.
When a user has finished typing, a request is made through Axios. When the response arrives, the field that issued the request updates its display according to the response. However, there is also an Axios interceptor which updates the global state to reflect the latest response. Is this wrong? Should the fields themselves commit the mutation to the store? Or is it okay to use an interceptor, since both run down the same call stack?
I think my biggest worry here is that the interceptor and the field will interpret the response differently...
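For what it's worth, here's a minimal sketch of one way to sidestep that worry: let the interceptors track only whether requests are in flight, and leave all interpretation of the response body to the fields (the module shape and mutation names here are made up):

```ts
import Vue from "vue";
import Vuex from "vuex";
import axios from "axios";

Vue.use(Vuex);

interface SyncState { pendingRequests: number }

export const store = new Vuex.Store<SyncState>({
  state: { pendingRequests: 0 },
  getters: {
    // The "is it safe to leave the page?" check reads this single flag.
    isSynced: (state: SyncState) => state.pendingRequests === 0,
  },
  mutations: {
    requestStarted(state: SyncState) { state.pendingRequests += 1; },
    requestSettled(state: SyncState) { state.pendingRequests -= 1; },
  },
});

// The interceptors only count in-flight requests; they never look at the
// response body, so they can't disagree with a field about its meaning.
axios.interceptors.request.use((config) => {
  store.commit("requestStarted");
  return config;
});
axios.interceptors.response.use(
  (response) => { store.commit("requestSettled"); return response; },
  (error) => { store.commit("requestSettled"); return Promise.reject(error); },
);
```

With that split, the fields stay the only place that interprets responses, and the store stays the only place that answers "are we synced?".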
Help is appreciated :D (and thanks for taking the time) -
I had the idea that part of the problem in NN and ML research is that we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators, but there's a big gap between symbolic and numeric computation.
But some of our bigger leaps in improvement haven't come just from new architectures, but from entirely new approaches to how data is transformed and how we calculate loss, KL divergence for example.
And it occurred to me that all we really need is training/test/validation data; with the right approach we can let the system discover not only the architecture (that's been done before) but also the nonlinear and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code, as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here or a maxpool there, but derive implementations of these all on their own.
More importantly, with a little pruning we could even use successful examples to bootstrap smaller, more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works and what doesn't, or do training, testing, and validation in batches to anneal a network in the correct direction.
By generating variations of successful nodes and graphs and substituting them in, we can compare variants to minimize error (for some measure of error beyond plain accuracy and precision) and select the best graph variations without having to do much point mutation within any given node, minimizing deleterious effects: sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point mutations typically cause disease.
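As a toy illustration of that selection step (this is not the interpreter linked below; the candidate functions and data are all invented), substituting candidate nonlinearities into an otherwise fixed graph and keeping whichever scores best on held-out data could look like:

```python
import numpy as np

# Candidate "nodes": each one is a substitutable nonlinearity.
CANDIDATES = {
    "relu":  lambda x: np.maximum(0.0, x),
    "tanh":  np.tanh,
    "sin":   np.sin,
    "swish": lambda x: x / (1.0 + np.exp(-x)),
}

def forward(weights, nonlin, x):
    # One hidden layer; the nonlinearity is the node being swapped out.
    w1, w2 = weights
    return nonlin(x @ w1) @ w2

def val_loss(weights, nonlin, x_val, y_val):
    pred = forward(weights, nonlin, x_val)
    return float(np.mean((pred - y_val) ** 2))

rng = np.random.default_rng(0)
x_val = rng.normal(size=(64, 8))
y_val = np.sin(x_val.sum(axis=1, keepdims=True))  # toy target
weights = (0.3 * rng.normal(size=(8, 16)), 0.3 * rng.normal(size=(16, 1)))

# Score every substitution on held-out data and keep the fittest node,
# without any point mutation inside the nodes themselves.
scored = sorted((val_loss(weights, f, x_val, y_val), name)
                for name, f in CANDIDATES.items())
print("best node by validation loss:", scored[0])
```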
It might seem like this wouldn't work out of the gate, just on the basis of intuition, but I think the benefit of working through node substitutions, or entire subgraph substitutions, is that we can check test/validation loss before training is even complete.
If we train a network to approximate a known loss, we can even have it evaluate the networks themselves, and run variations on our network's loss node to find better losses during training; at some point nodes could refer to these same loss-calculation graphs within themselves, switching between them dynamically... via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await()-style opcodes on some nodes that, upon being encountered, queue up ticks from the upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
I've written all the classes and started on the interpreter itself; just a few things need to be fleshed out now.
Here's my shitty little partial sketch of the opcodes and ideas:
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try MNIST, or teach it step by step how to do sequence masking and prediction. Dunno yet. -
When every related field has a goddamn different way of working with the data on hand...
For example:
`tht_date` ("Y-m-d", Date) - the expiration date on the product; hence there can be multiple rows of the same product, each with a different THT.
`tht_alert` ("-2 months", varchar, a DateTime-modify mutation string) - sends an alert when this interval is hit, and acts as the activator of the tht_date field (unless the value is "none").
`tht_minimum` ("28", integer, quantity of days before tht_date) - locks products from being sent out/collected.
...
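To give a taste of the juggling, here's roughly what resolving just these three fields looks like (a PHP-style sketch, since "-2 months" reads like a DateTime::modify() string; the row values are invented):

```php
<?php
$row = [
    'tht_date'    => '2024-09-30',  // Date, "Y-m-d"
    'tht_alert'   => '-2 months',   // DateTime::modify() string, or "none"
    'tht_minimum' => 28,            // integer: days before tht_date
];

$tht = new DateTimeImmutable($row['tht_date']);
$now = new DateTimeImmutable('today');

// tht_alert speaks "modify() mutation string", relative to the THT date.
$alertFrom = $row['tht_alert'] !== 'none'
    ? $tht->modify($row['tht_alert'])
    : null;

// tht_minimum speaks "plain integer day count" before the same date.
$lockedFrom = $tht->sub(new DateInterval("P{$row['tht_minimum']}D"));

$shouldAlert = $alertFrom !== null && $now >= $alertFrom;
$isLocked    = $now >= $lockedFrom;
```

Three fields, three different date dialects, and every query that picks "the best row" has to re-implement all of them.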
How would you expect this *not* to become friggin' spaghetti when trying to resolve the best row ID?
These values are in the wrong spot in the first place, and then they also behave entirely differently in relation to each other...
I hate the person who set this up. When is the madness going to stop...
FFS!! -
SWIFT 3 sucks! SUCKS I TELL YOU! Swift 3 changes NSError to its own Error type, so the categories (i.e. the extensions) that I had added to the NSError class (like convenience inits and NSDictionary mappings to my own variables) are ALL LOST!!! MORE THAN 100 LINES OF CODE LOST!! because of this piece-of-shit mutation of the DATA TYPE ITSELF!!! When Objective-C code is used in Swift (via the mix-and-match technique), DON'T UPGRADE TO Swift 3.0, GUYS, DON'T DO IT!!! Especially if you have legacy code in your project!
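For anyone stuck on the same migration, here's a minimal sketch of the bridging dance Swift 3 forces (the error domain and convenience init are made up): catch clauses now hand you an `Error`, so old NSError categories only apply after you bridge back explicitly.

```swift
import Foundation

extension NSError {
    // A legacy-style convenience init, like the ones the migration "lost".
    convenience init(code: Int, reason: String) {
        self.init(domain: "com.example.app", code: code,
                  userInfo: [NSLocalizedDescriptionKey: reason])
    }
}

func risky() throws {
    throw NSError(code: 42, reason: "legacy failure")
}

do {
    try risky()
} catch {
    // In Swift 3 `error` is the new `Error` type; bridge back to NSError
    // to reach the old extensions and userInfo plumbing.
    let nsError = error as NSError
    print(nsError.code, nsError.localizedDescription)
}
```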