Popularity of programming languages according to the DRRDSI (DevRant Rubber Duck Selling Index):
1. JS
2. Java
3. Python
4. C#
5. PHP
6. C++
7. Ruby
8. SQL
9. Swift
-
Colleague: I really wish array index in all languages would start from 1. If I ever write a language the index will start from 1.
Me:
-
After working as a developer for 4-5 years, I finally took up school again.
The teacher of our first programming course insisted that we name all our variables in our local language (Swedish) and always start arrays at index 1.
-
I had to open the desktop app to write this because I could never write a rant this long on the app.
This will be a well-informed rebuttal to the "arrays start at 1 in Lua" complaint. If you have ever said or thought that, I guarantee you will learn a lot from this rant and probably enjoy it quite a bit as well.
Just a tiny bit of background information on me: I have a very intimate understanding of Lua and its c API. I have used this language for years and love it dearly.
[START RANT]
"arrays start at 1 in Lua" is factually incorrect because Lua does not have arrays. From their documentation, section 11.1 ("Arrays"), "We implement arrays in Lua simply by indexing tables with integers."
From chapter 2 of the Lua docs, we know there are only 8 types of data in Lua: nil, boolean, number, string, userdata, function, thread, and table
The only unfamiliar thing here might be userdata. "A userdatum offers a raw memory area with no predefined operations in Lua" (section 26.1). Essentially, it's for the API to interact with Lua scripts. The point is, this isn't a fancy term for array.
The misinformation comes from the table type. Let's first explore, at a low level, what an array is. An array, in programming, is a collection of data items all in a line in memory (The OS may not actually put them in a line, but they act as if they are). In most syntaxes, you access an array element similar to:
array[index]
Let's look at C, so we have some solid reference. "array" would be the name of the array, but what it really does is keep track of the starting location of the array in memory. Memory in computers is addressed numerically. In a very basic sense, the first sector of your RAM is memory location (referred to as an address) 0. "array" would be, for example, address 543745. This is where your data starts. Arrays can only be made up of one type; this is so that each element in that array is EXACTLY the same size. So, this is how indexing an array works: if you know where your array starts, and you know how large each element is, you can find the 6th element by starting at the start of the array and adding 6 times the size of the data in that array.
Tables are incredibly different. The elements of a table are NOT in a line in memory; they're all over the place depending on when you created them (and a lot of other things). Therefore, an array-style index is useless, because you cannot apply the above formula. In the case of a table, you need to perform a lookup: search through all of the elements in the table to find the right one. In Lua, you can do:
a = {1, 5, 9};
a["hello_world"] = "whatever";
a is a table with the length of 4 (the 4th element is "hello_world" with value "whatever"), but a[4] is nil because even though there are 4 items in the table, it looks for something "named" 4, not the 4th element of the table.
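If it helps to see the same behavior outside Lua, here's a rough Python analogue (Python dicts behave like Lua tables; everything here is made up to mirror the example):

a = {1: 1, 2: 5, 3: 9}          # Lua's a = {1, 5, 9} auto-keys its values from 1
a["hello_world"] = "whatever"   # mixed key types are fine, just like in a table
print(len(a))                   # 4 -- four entries total
print(a.get(4))                 # None -- no key "named" 4, exactly like a[4] being nil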
This is the difference between indexing and lookups. But you may say,
"Algo! If I do this:
a = {"first", "second", "third"};
print(a[1]);
...then "first" appears in my console!"
Yes, that's correct, in terms of computer science. Lua, because it is a nice language, makes keys in tables optional by automatically giving them an integer value key. This starts at 1. Why? Let's look at that formula for arrays again:
Given array "arr", size of data type "sz", and index "i", find the desired element ("el"):
el = arr + (sz * i)
This NEEDS to start at 0 and not 1 because otherwise, "sz" would always be added to the start address of the array and the first element would ALWAYS be skipped. But in tables, this is not the case, because tables do not have a defined data type size, and this formula is never used. This is why actual arrays are incredibly performant no matter the size, and the larger a table gets, the slower it is.
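To make that formula concrete, here's a minimal Python sketch that fakes a C array with raw bytes (the values and sizes are arbitrary):

import struct

buf = struct.pack("5i", 10, 20, 30, 40, 50)   # five 32-bit ints, contiguous in memory
sz = 4                                        # size of one element, in bytes
i = 2                                         # 0-based index

# el = arr + (sz * i): read one element at the computed byte offset
el = struct.unpack_from("i", buf, sz * i)[0]
print(el)   # 30 -- and with i = 0 the offset is 0, so the first element isn't skipped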
That felt good to get off my chest. Yes, Lua could start the auto-key at 0, but that might confuse people into thinking tables are arrays... well, I guess there's no avoiding that either way.
-
I bet everyone here knows these two situations:
1. You have a bug, show the code to somebody else for debugging and the bug is gone, but as soon as you're alone again, it reappears.
2. Your program works fine, you want to show somebody what you accomplished and...
IndexOutOfBoundsException: The index was outside the bounds of the array.
-
What is a pointer?
A descriptive and ELI5 answer found on Reddit:
You have a house.
When you’re outside, and you want to go home, you don’t make another house right where you are, because it’s too big for you to carry around or take apart.
So you carry a piece of paper or store on your phone the address of your house. Now you always know exactly how to find your house so that you can go home.
The piece of paper or your phone is a variable.
The variable contains an address (a reference) to your house.
You can make a copy of this piece of paper and hand it out to your friends when you invite them over, instead of building each friend a copy of your house.
You can have an address book filled with pages, where each page is an address (i.e., an array of pointers). Each page you turn (each index you increment) goes to the directly next address-containing variable.
Now if you cut the address book in half along its height, and then attach the lower half behind the upper half, then you have a book with smaller pages but more pages.
You can store phone numbers in this, and even if it’s the same total size, you have double the number of pages and double the number of phone numbers (if you store one number on each page).
Now, since the pointers to home-addresses are different from pointers to phone-numbers, turning the page in an address book (increment pointer by 1) moves forward by one address.
But turning the page in the phone book (incrementing pointer by 1) leads you to the next phone number, even if you technically turned only “half a page”.
That’s how pointer arithmetic works.
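A small Python sketch of that last point, using ctypes so the element sizes are visible (all the numbers are invented):

import ctypes

addresses = (ctypes.c_int64 * 4)(100, 200, 300, 400)            # address book: 8-byte pages
phones = (ctypes.c_int32 * 8)(11, 22, 33, 44, 55, 66, 77, 88)   # phone book: 4-byte pages

# turning one page advances by the size of one entry, whatever that size is
addr_base = ctypes.addressof(addresses)
print(ctypes.c_int64.from_address(addr_base + 1 * 8).value)   # 200 -- moved 8 bytes

phone_base = ctypes.addressof(phones)
print(ctypes.c_int32.from_address(phone_base + 1 * 4).value)  # 22 -- only "half a page"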
Source: https://reddit.com/r/...
-
Allright, I'm pissed.
Warning: more than 4k characters written by a non-native English speaker ahead.
Legend:
Storytelling
> Short summary of the current situation
> "Something being said"
> (Something being thought)
* Actions *
-- Background --
In an attempt to reorganize my desktop I accidentally deleted a folder I called "development". In there I stored links to all my IDEs (not sure what you call those in English), but also some workspaces like Unity (not much stuff there), Processing (just some hobby stuff) AND Eclipse (FUCKING EVERYTHING RELATED TO SCHOOL WEB DEVELOPMENT). Now 3 days have passed and I realized this important folder was missing. I had emptied that Windows trash the instant I deleted the folder from my desktop.
> Shit, Regret
Install a file-restore program. Do every possible search. Nothing found.
> Big shit
Deadline was in like 3 days. Week was fucking rough so:
> "Screw this, the teacher nevet corrects the assignments and also fuck JSP"
Fast forward 2 months to last week. Teacher starts checking assignments.
> Fuck
* Sees pattern: Only students with missing or bad marks are checked. *
* Feels safe *
Teacher approaching me while working on current projects.
* Doesn't feel safe anymore *
> "Well, I'ld like to see your THAT programm"
> Well fuck
* Tells the truth *
> "Well that's unfortunate, but I must write a mark. Do you really have nothing to show?"
* Remember that I worked on the school pcs when I started *
> (Better than nothing. Gotta try it)
* Teacher checks program, not pleased *
> (Fuck me, but at least it's over...)
> Nope
* Teacher calls me over *
> "With the mark I had to write today you can't reach that good mark even with a good examination, what are we gonna do about this?"
> "Well, there were other assignments that were never checked. Could we replace that mark with one of those?"
* Teacher agrees *
> (Srly bless this guy for that support)
My best choice was an Android app we had to develop during December in pairs. I did the front end (90% of the whole work) and my partner the backend (10 %). I also did 30 % of these 10 %, because I had to review the shit he wasn't able to debug himself.
> brainlogic.exe provided by windows vista
This distribution was partly my fault since I overestimated the work needed for the backend, but also the fault of that fucker. I mean, he didn't tell me the professor already provided 90 % of the backend...
Rest of the week was really busy (always 1 or 2 things to study for each day, workout and family stuff).
Yesterday (it's past 12 already) I arrived at the dorm at ~9 pm, so I could finally start reviewing my code.
Internet gets shut down at 10 pm.
Gotta hurry.
* Opens project *
* Sees half a year old code *
* Fights urge to puke *
> (Alright I gotta do this. For the mark!)
* waits for gradle to index files *
* Remembers the fact that I haven't opened Android Studio in the last 2 months *
For those who don't develop with Android Studio: this is equivalent to ~10k Windows updates waiting to be installed
> (Well, gotta work with this kinda old version)
"gradle sync failed"
> ( Ok, just restart it. You're fine )
* Android Studio doesn't react anymore and/or renders *
* Waits 5 min *
* Restarts laptop *
* Android Studio is reacting again*
"gradle is synching"
9:45 pm: gradle is done and I can finally compile my app
> FML
* Sees App launched on phone *
* Almost pukes again *
> (This was the assigment for the UX chapter, so design doesn't matter)
UX is decent. Proceeds with testing stuff. Safe paths work, but some bugs can be triggered by going off them
* fixes as much as possible *
* Takes quick look at backend *
Date date = new Date(GregorianCalendar.getInstance().getTimeInMillis());
C'mon, I asked you to do the backend. You got 90% of the methods already written by the teacher and had 2 months to write the interfaces to my front end, AND you come up with shit like that (a plain new Date() does exactly the same thing).
Note: this example is a minor specimen of brainlogic.exe
I did what I could to improve my situation. Hopefully he doesn't discover the bugs. And if it's a backend bug, then I couldn't care less, since that was not my job!
Wish me luck for today!
-
TLDR - you shouldn't expect common sense from idiots who have access to databases.
I joined a startup recently. I know startups are not known for their stable architecture, but this was next level stuff.
There is one prod mongodb server.
The db has 300 collections.
200 of those 300 collections are backups/test collections.
25 collections are used to store LOGS!! They decided to store millions of logs in a NoSQL db because setting up a MySQL server requires effort; why do that when you've already set up MongoDB. Lol 😂
Each field is indexed separately in the log.
1 collection is 2 TB and has more than 1 billion records.
Out of the 1 billion records, 1 million records are required; the rest are obsolete. Each field has an index. Apparently the asshole DBA never knew there's something called capped collections or partial indexes.
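For anyone stuck with the same mess: both features are a one-liner each. A sketch with pymongo (connection string, collection and field names all made up):

from pymongo import ASCENDING, MongoClient

db = MongoClient("mongodb://localhost:27017").app

# capped collection: fixed size, old log entries age out automatically
db.create_collection("logs", capped=True, size=50 * 1024 * 1024)

# partial index: only index the documents you actually query for
db.logs.create_index([("level", ASCENDING)],
                     partialFilterExpression={"level": "error"})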
I've been trying to get approval to clean up the db for 3 months, but fucking bureaucracy. Extremely high server costs, plus every week the db goes down because some idiot runs a query on this mammoth collection. There's one single set of credentials for everything. Everyone from applications to interns uses the same creds.
And the asshole DBA left, leaving me in charge of handling this shit now. I am trying to fix this but am stuck getting approval from business management. Devs like these make me sad: zero respect for their work and no willingness to listen to people trying to improve the system.
Going to leave this place really soon. No point in working somewhere where you are expected to show up for 8 hours, irrespective of whether you even switch on your laptop.
Wish me luck folks.
-
My boss still thinks that resizing his browser is equivalent to mobile testing, his designs are desktop-only, and he says "just have everything stack on top of each other".
below is how I feel.
* {
    position: absolute;
    z-index: 1;
    top: 0;
    left: 0;
}
-
Teaching my girlfriend how to code and she’s got to the indexes start at 0 crisis.
Just to make her feel better, anyone else remember their indexes start at 0 crisis? 😅
So far the convo is “why does count start at 1 and index start at 0?!? Developers can't fucking count”
-
When defining a range, let's say from 1 to 3, I expect:
[1, 2, 3]
Yet most range functions I come across, e.g. lodash, will do:
_.range(1, 3)
=> [1, 2]
And their definition will say: "Creates an array of numbers ... progressing from start up to, but not including, end."
Yet why the fuck not including end? What don't I understand about the concept of a frigging range that you won't include the end?
The only thing I can come up with is that this is related to the arrays-start-at-0 thing, and someone did not want to subtract 1 when preparing a for loop over a 10-item array with range(0,10), even though they really want a range from 0 to 9. (And they should not use a for loop here to begin with, but a foreach construct, anyway.)
So the length of your array does not match the final index of your array.
Bohhoo.
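That index story is, for reference, the textbook justification, and in Python it looks like this:

arr = ["a", "b", "c"]
print(list(range(0, len(arr))))   # [0, 1, 2] -- every valid index, no len - 1 anywhere
print(list(range(1, 3)))          # [1, 2] -- the end is excluded, as the docs say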
Yet now we can have ranges with very weird steps, and now you always have to consider your proper maximum, leading to code like:
var start = 10;
var max = 50;
var step = 10;
_.range(start, max + step, step)
=> [10, 20, 30, 40, 50]
and during code review this would scream "bug!" in my face.
And it's not only lodash doing that, but also python and dart.
Except PHP. PHP's range is inclusive. Good job, PHP.
-
Is obsidian a fucking joke?
Seriously, is it a joke? Why would you ever care so much about indexing literally everything, if the entire thing crashes and/or takes >5min to LITERALLY just open the fucking directory and/or (so help you) if that directory is full of projects/repos or whatever the fuck and the total size of said directory is like >5GB.
WHY THE FUCK WOULD YOU INDEX EVERYTHING? -- "Ohh obsidian's not supposed to be used a fully fledged IDE, ohh obsidian should just handle MD files and normal sized projects, ohh the plugins and ease-of-use" -- Fuck.
There's no fucking real reason to index everything, BY DEFAULT. You open a directory with Obsidian? Doesn't matter, it's 1 byte, it's 100GB, you get indexed. Deal with it. It will use LITERALLY every resource your computer has. I'm surprised it doesn't go galaxy brain and ping if any other computers/devices are on the network and then attempt to connect and use their hardware (obsidian can be like a node!).
How shit can you be at understanding basic data structures and algorithms, where you just revert to based google-chrome brain and let the FUCKING TEXT EDITOR -- OBSIDIAN IS A FUCKING TEXT EDITOR HOLY SHIT -- hog all conceivable memory.
I swear to <some-deity> if anyone fucking says "Ohhhhhhhh actually, it's not a text editor, it has plugins and features and shit, it does all dis cool stff", OR, "Ohhhhh actually, obsidian indexes things for a very specific/rationale/apt/pragmatic/academic reason" OR "ohhhh, I have 100 iphones, 1000 ipads and a trillion desktop computers that each have 256GB of memory, why you hating on obsidian?" then go kick rocks. The fucking lot of you. Are you fucking kidding me.
-
Two big moments today:
1. Holy hell, how did I ever get on without a proper debugger? Was debugging some old code by eye (following along and keeping track mentally of what the variables should be and what each step did). That didn't work because the code isn't intuitive. Tried the print() method, old reliable as it were. Kinda worked, but didn't give me enough fine-grain control.
Bit the bullet and installed Wing IDE for python. And bam, it hit me. How did I ever live without step-through, and breakpoints before now?
2. Remember that non-sieve prime generator I wrote a while back? (well maybe some of you do). The one that generated quasi lucas carmichael (QLC) numbers? Well thats what I managed to debug. I figured out why it wasn't working. Last time I released it, I included two core methods, genprimes() and nextPrime(). The first generates a list of primes accurately, up to some n, and only needs a small handful of QLC numbers filtered out after the fact (because the set of primes generated and the set of QLC numbers overlap. Well I think they call it an embedding, as in QLC is included in the series generated by genprimes, but not the converse, but I digress).
nextPrime() was supposed to take any arbitrary n above zero and accurately return the nearest prime number above the argument. But when it started, it would return 2, 3, 5, 6... while genprimes() worked fine for some reason.
So genprimes loops over an index, i, and tests it for primality. It begins by entering the loop, and doing "result = gffi(i)".
This calls a function that runs four tests on the argument passed to it. I won't go into detail here about what those are, because I don't even remember how I came up with them (I'll make a separate post when the code is fully fixed).
If the number fails any of these tests then gffi would just return the value of i that was passed to it, unaltered. Otherwise, if it did pass all of them, it would return i+1.
And once back in genPrimes() we would check if the variable 'result' was greater than the loop index. And if it was, then it was either prime (comparatively plentiful) or a QLC number (comparatively rare)--these two types and no others.
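Since the four tests inside gffi() aren't shown here, here's a minimal stand-in sketch of the genprimes() structure described above. A single base-2 Fermat check plays the role of the four tests; it has the same shape, passing every prime plus rare impostors (341, 561 and friends, which is where the Carmichael-flavored stragglers sneak in):

def gffi(i):
    # stand-in for the real four tests: i + 1 on a pass, i unchanged on a fail
    return i + 1 if (i == 2 or (i > 2 and pow(2, i - 1, i) == 1)) else i

def genprimes(n):
    out = []
    for i in range(2, n):
        result = gffi(i)
        if result > i:   # only primes (and the rare QLC-like cases) land here
            out.append(i)
    return out

print(genprimes(30))     # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]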
nextPrime() was only taking n, and didn't have this index to compare to, so the prior steps in genprimes were acting as a filter that nextPrime() didn't have, while internally gffi() was returning not only primes and QLCs, but also plenty of composite numbers.
Now *why* that last step in genPrimes() was filtering out all the composites, idk.
But now that I understand what's going on, I can fix it, and hypothetically it should be possible to enter a positive n of any size and, without additional primality checks (such as is done with sieves, where you have to check off multiples of n), get the nearest prime numbers. Of course I'm not familiar enough with prime number generation to know if that's an achievement or worth mentioning, so if anyone *is* familiar with how something like that holds up compared to other linear generators (O(n)?), I'd be interested to hear about it.
I also am working on filtering out the intersection of the sets (QLC numbers), which I'm pretty sure I figured out how to incorporate into the prime generator itself.
I also think it may be possible to generate primes even faster, using the carmichael numbers or related set--or even derive a function that maps one set of upper-and-lower bounds around a semiprime, and map those same bounds to carmichael numbers that act as the upper and lower bound numbers on the factors of a semiprime.
Meanwhile I'm also looking into testing the prime generator on a larger set of numbers (to make sure it doesn't fail at large values of n) and so I'm looking for more computing power if anyone has it on hand, or is willing to test it at sufficiently large bit lengths (512, 1024, etc).
Lastly, I realized the earlier work I posted (linked below) could be applied with ECM to greatly reduce the smallest factor of a large number.
If ECM, being one of the best methods available, only handles 50-60 digit numbers, & your factors are 70+ digits, then being able to transform your semiprime product into another product tree thats non-semiprime, with factors that ARE in range of ECM, and which *does* contain either of the original factors, means products that *were not* formally factorable by ECM, *could* be now.
That wouldn't have been possible though without enormous help from many others such as hitko who took the time to explain the solution was a form of modular exponentiation, Fast-Nop who contributed on other threads, Voxera who did as well, and support from Scor in particular, and many others.
Thank you all. And more to come.
Links mentioned (because DR wouldn't accept them as they were):
https://pastebin.com/MWechZj9
-
Reading about Lua, see this:
"Lua arrays are 1-based: the first index is 1 rather than 0 as it is for many other programming languages (though an explicit index of 0 is allowed)."
*close tab*
-
Probably the smallest, most inconsequential rant I'll write here, but - is it that hard for websites with a main search input or login field on their landing page to set its tab index to 1?
I mean, my mouse is all the way over *there* ~
-
It looks like this week's "Weekly Group Rant" is trolling all the developers who have side projects that never got finished.
1. CMS like WordPress
2. MVC framework like Laravel
3. Android App like Tinder
and... list goes on.
#1, #2 and a few others are still stuck on the index page.
-
Friend getting an 'index out of range' error, so I wander over and have a look 🤔
Array_variable[-1]
🙈🙉😩
No words were needed!
-
Arguing with a guy in a PR that substring(0,1) (first index inclusive, last exclusive) is equal to charAt(0), whereas he seems to think it's charAt(1).
My patience is wearing thin, but I now feel the need to check I'm not the moron here - someone please confirm if I'm the idiot here or not?
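For whatever it's worth, Python's half-open slices follow the same contract, and they settle it:

s = "devrant"
print(s[0:1])   # "d" -- start inclusive, end exclusive, same rule as substring(0, 1)
print(s[0])     # "d" -- i.e. charAt(0), not charAt(1)
-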
I just found something cool by accident.
Assuming you start the fibonacci sequence on 2, you can find any of the fibonacci numbers for a given index with a simple trick:
Let Phi = 1.618
Let your index n = n+1
Floor(phi**(n+1+0.328))+1 = the fibonacci number at index n in the sequence.
This probably breaks down past some point, but it's nifty.
-
Someone created a 0-followers private Twitter account and posted something to try out the new views count feature.
It raked dozens of views in a couple hours.
HOW?!?
Source: https://twitter.com/briggityboppity...
It looks like a funny data reverse-engineering exercise, so let's try and figure out what is going on.
Hypothesis 1) it is the OP's own views.
Reasonable, but unlikely if what OP says about not checking it for hours is true.
H2) It's some background job in OP's device that is refreshing OP's own latest tweets, so even without human interaction technically H1 is true. It would be some really shoddy engineering to count eye-less page views, but that's also what managers would demand.
H3) it's some internal Twitter automated function like back up, replication, indexing and word count.
See H2, it would be even dumber to count that as page views.
H4) it's some internal human reviewing for a keyword that could be associated with porn (in this case, "butts"). Really? dozens of humans to review a no-impact single post? They would have to employ hundreds of thousands of reviewers.
H5) it's some page-loading shit, like thousands of similar tweets get stored in the same index hash page and end up counting as a view in all of them every time someone loads the index page. It would be like counting every hit in the namenode as a hit in every data asset in it's Hadoop partition, or every hit in a storage block as a hit in each of it's files.
Duuuumb and kinda like H3.
H6) page views are just a fraud to scam investors. Maybe it's a "most Blockchain transactions are fake" situation, maybe it's a "views get more engagement if you don't think a lot about it" situation, maybe it's a "we don't use the metric system to count page views" situation.
All of them are very dumb.
Other hypotheses or opinions?
-
I need to delay execution of code in a for loop, how do I do that?
PHP: sleep(3); // sleep() takes seconds
Javascript:
const waitFor = (ms) => new Promise(r => setTimeout(r, ms))
const asyncForEach = async (array, callback) => {
for (let index = 0; index < array.length; index++) {
await callback(array[index], index, array)
}
}
const start = async () => {
await asyncForEach([1, 2, 3], async (num) => {
await waitFor(50)
console.log(num)
})
console.log('Done')
}
start()
Fuck you, JavaScript.
-
Found a clever little algorithm for computing the product of all primes between n and m without recomputing them.
We'll start with the product of all primes up to some n.
so [2, 2*3, 2*3*5, 2*3*5*7, ...] etc.
prods = []
i = 0
total = 1
while i < 100:
    total = total * primes[i]   # primes: a precomputed list of the first 100 primes
    prods.append(total)
    i = i + 1
Terrible variable names, can't be arsed at the moment.
The result is a list with the values
2, 6, 30, 210, 2310, 30030, etc.
Now assume you have two factors, with indexes i and j, where j > i.
You can calculate the gap between the two corresponding primes easily.
A gap is defined as the product of all primes that fall between the prime indexes i and j.
To calculate the gap between any two primes, merely look up their indexes, and then do:
prods[j-1]/prods[i]
That is the product of all primes between the j-th prime and the i-th prime.
To get the product of all primes *under* i, you can simply look it up like so:
prods[i-1]
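A quick sanity check of both lookups, using the prods list from above:

prods = [2, 6, 30, 210, 2310]     # products of the first 1..5 primes

i, j = 1, 4                       # the primes 3 and 11 (0-based indexes)
print(prods[j - 1] // prods[i])   # 35 = 5 * 7, the primes strictly between 3 and 11
print(prods[i - 1])               # 2, the product of all primes below index 1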
Incidentally, finding a number n that is equivalent to (prods[j+i]/prods[j-i]) for any *possible* value of j and i (regardless of whether you precomputed n from the list generator for prods, or simply iterated in n=n+1 fashion), is equivalent to finding an algorithm for generating all prime numbers under n.
Hypothetically you could pick a number N out of a hat that's a thousand digits long, and it happens to be the product of all primes underneath it.
You could then start generating primes by doing
factors = []
k = 3
while k < N:
    if (N/k) % 1 == 0:      # k divides N evenly
        factors.append(N/k)
    k = k + 1
The only caveat is that there should be more false solutions than real ones. In other words, there's no telling if you found a solution N corresponding to some value of (prods[j+i]/prods[j-i]) without testing the primality of *all* values of k under N.
-
Am I the only developer in existence who's ever dealt with Git on Windows? What a colossal train wreck.
1. Authentication. Since there is no SSH key/git URL support on Windows, you have to retype your git credentials Every Stinking Time you push. I thought Git Credential Manager was supposed to save your credentials? And this was impossible over SSH (see below). The previous developer had used an HTTP git URL with his username and password baked in for authentication. I thought that was a horrific idea, so I eventually figured out how to use a Bitbucket app password.
2. Permissions errors
In order to commit and push updates, I have to run Git for Windows as Administrator.
3. No SSH for easy git access
Here's where I confess that this is a Windows Server machine running as some form of production. Please don't slaughter me! I am not the server admin.
So, I convinced the server guy to find and install some sort of ssh service for Windows just for the off times we have to make a hot fix in production. (Don't ask, but more common than it should be.)
Sadly, this ssh access is totally useless as the git colors are all messed up, the line wrap length and window size are just weird (seems about 60 characters wide by 25 lines tall) and worse of all I can't commit/push in git via ssh because Permissions. Extremely aggravating.
4. Git on Windows hangs open and locks the index file
Finally, we manage to have Git for Windows hang quite frequently and lock the git index file, meaning that we can't do anything in git (commit, push, pull) without manually quitting these processes from task manager, then browsing to the directory and deleting the .git/index.lock file.
Putting this all together, here's the process for a pull on this production server:
Launch a VNC session to the server. Close multiple popups from different services. Ask Windows to please not "restart to install updates". Launch git for Windows. Run a git pull. If the commits to be pulled involve deleting files, the pull will fail with a permissions error. Realize you forgot to launch as Administrator. Depending on how many files were deleted in the last update, you may need to quit the application and force close the process rather than answer "n" for every "would you like to try again?" file. Relaunch Git as Administrator. Run Git pull. Finally everything works.
At this point, I'd be grateful for any tips, appreciate any sympathy, and understand any hatred. Windows Server is bad. Git on Windows is bad.
-
My favorate bookmarklet (ES6 only):
javascript:(()=>{var b,c,a=document,f="onreadystatechange",h="https://rawgithub.com/smore-inc/...=(p,q)=>{p.readyState?p[f]=()=>{"loaded"!=p.readyState&&"complete"!=p.readyState||(p[f]=null,q&&q())}:p.onload=function(){q&&q()}},k=()=>{clippy.load("Clippy",p=>{$(".clippy").css("position","fixed"),$(".clippy").css("z-index",1e3),p.show(),p.moveTo(100,100)})},m=()=>{(c=a.createElement("script")).src=h+"clippy.js",a.body.appendChild(c);var p=a.createElement("link");p.rel="stylesheet",p.type="text/css",p.media="all",p.href=h+"clippy.css",a.getElementsByTagName("head")[0].appendChild(p)};"undefined"==typeof jQuery?(b=a.createElement("script"),b.src="https://ajax.googleapis.com/ajax/...,j(b,()=>{m(),j(c,k)})):"undefined"==typeof clippy?(m(),j(c,k)):k()})();
-
This always gets me:
Developers complaining that their 4 year old / cheap ass computer is slow.
Get. A. New. One.
It's not that hard.
Here, let me do one for you:
https://computeruniverse.net/en/...
I just went to a site that delivers across Europe and selected a cheap laptop with a decent CPU and SSD. Short on RAM, sure, and without a Windows license. But you can buy RAM for an additional 50$, and that brings you to a total of 550€, delivery included. And it will WORK. And it will be fast.
It's too expensive?
No, not exactly. Wherever you are in the world, if you can code decently - well enough to have the right to complain about development tools - you are eligible for at least 10$ per hour as a freelancer across the globe. I've had such opportunities offered to me by many organizations, especially non-profit ones that need cheap employees. I was actually offered more, but let's stick to 10$ per hour.
So that's 1600$ per month. Enough to buy 3 such laptops. Oh, taxes, I forgot. So you get 2 laptops. Wait! You need food and everything else. Well if you're in a country where that offer actually makes sense, then it's likely that you can live off of 400$ per month quite well. Maybe 800$ if you need to pay rent.
So that's roughly 1 month of work for a laptop that will make you not waste time on waiting for stuff.
Sweet! 1 Month! What does it get me?
Well assuming that you have no laptop, it gets you A JOB that pays you 1600$ per month.
But if you DO have a laptop, you can sell it for cheap, and benefit from the following:
1. Boot-up time from 30-60 seconds to 10 seconds.
2. Installing software - from 1 minute to 10 seconds.
3. Opening a browser - from 10 seconds to 1 second.
4. Opening an advanced text editor (Atom, VS.Code) - from 10 seconds to 1 second.
5. Searching for a file on your entire hard drive - from 1 hour to 2 minutes.
....
You get the point. Waiting is reduced by several times.
So how much do you really wait when coding?
Well are you compiling? Are you opening a new project and the IDE needs to re-index the files? Are you opening programs like a terminal emulator, browser and such? Are you using virtual machines for dev environments?
Well all of these processes become several times faster. Depending on how often you do it, you'll be saving yourself from 1 hour per day to upto 4 hours per day (my case, where a HDD would be just out of the question).
How much is that time worth? At least 10$ per day. If you're working 20 days per month, 240 days per year, that's a total of 2400$. And over the 2-year lifetime of that crappy laptop, that's 4800$ saved. And that's with hugely conservative numbers. Nobody pays 10$ per hour any more, except if you've just started in the industry. I know because I've been there.
Please, for all that's sacred to you, justify right here, right now, HOW THE FUCK you cannot afford to get that 8GB of RAM, that cheap-ass SSD for 100$, or even a brand new laptop (hey! it's even portable and has FHD graphics on it!) for 550$.
That's why every time I hear a professional developer complain that they don't have money for a decent machine, I have to ask: why the fuck are you wasting yours and everyone else's time?!
-
-Rant-
How do you (not) secure your REST-based web service?
1. Chain it to a shady organic authentication system built by a horde of monkeys high on tequila.
2. have secret keys that get copy pasted into config flat files, and index them on your code search engine.
3. make the onboarding so platform-specific that you need 500 environment variables, 50 scripts, 5 fancy device presses and a tap dance to make a GET call to the service.
4. fish through 500 rotating log files that the authentication system generates for each API call made.
5. Leave traces all over the host, so that if you have to start over, you should sudo rm -rf / and set fire to your computer.
-
How come your fucking pseudocode is far more complex than Python code? You're a fucking university teacher FFS. ALSO, TELL ME WHY THE FUCK YOU START YOUR INDEX LIST AT 1.
-
LXC, no doubt.
I mean to be fair, LXC is an amazing container runtime once you manage to set it up. But setting it up is the hard bit. Starting off with LXC 2.x, it was a nightmare to find out how to get things like the storage backends working. But with ZFS it ended up being alright. Find some arcane values to stick in the /etc/lxc/default.conf to use ZFS as the backend and then the default storage location on those ZFS pools (I'll get back to that later), and it worked alright. Again, once it works it's great, but setting it up and finding the right configuration keys is absolute hell.
So, LXC 2.x for a while and a few months ago I finally ended up upgrading to 3.x. Every single configuration key changed. Every single one of them, and that's why I had to 1) learn LXC all over again, and 2) redeploy each and every one of my containers. That process is still not entirely completed. ZFS backend was once again a dive into arcane configuration keys found on forums and whatnot. Yeah.. official documentation has none of it. Oh and in 3.x you now also have to dodge the torrent of "just use LXD m8" messages. Yeah, very helpful when LXD is also the ONLY way to reasonably configure it. Absolutely beautiful. Oh and as far as the ZFS default storage location goes (such as ssd/lxc/ct)? Yeah forget about it. There's no configuration option for it anymore, and the default is "lxc". In ZFS lingo that means that LXC has the audacity to demand a whole pool for itself. No. No you don't deserve a whole pool for yourself. But hey at least you can define the storage location to use in the lxc-create command! Every single time you have to define it in lxc-create. I abstracted it away into my own LXC interface, so no big deal really. But yeah... That could absolutely be better. And in 2.x it was actually better.
Oh and btrfs, the filesystem I'd like to use on low memory systems because ZFS' ARC is too much on such systems? Yeah forget about it. I still have no idea how to do it. Thank you LXC and its amazing documentation!
And if you want the icing on the cake for LXC's terrible documentation, see their repo's index page at https://github.com/lxc/lxc/.... Yeah, it's totally still at 2.x... That's how well they maintain that. Even Debian has 3.x now. And if you look at the branches, you'll find that even 4.x is already available and considered stable.
-
Please stop, stop now...
(BTW, the assignment statement was on one line; I added breaks just so I could fit it into a screenshot)
If the text is too blurry:
int index;
string fileNameWithoutExt;
additionalData.MailImportConfigCode = (fileNameWithoutExt = schema.Remove(schema.LastIndexOf('.'))).Substring((index = fileNameWithoutExt.LastIndexOf('.') + 1), fileNameWithoutExt.Length - index);
}
-
Reading open-source libs written in TypeScript is a nightmare.
WTF:
export function concatMap<T, I, R>(
project: (value: T, index: number) => ObservableInput<I>,
resultSelector?: (outerValue: T, innerValue: I, outerIndex: number, innerIndex: number) => R
): OperatorFunction<T, I|R> {
return mergeMap(project, resultSelector, 1);
}
That is just a fucking definition, no execution code inside. (To be fair, delegating to mergeMap with a concurrency of 1 IS the whole implementation.)
-
Okay... I need to confess.
I actually like the idea of counting arrays from 1, like some languages do.
It makes code cleaner.
Think about it.
You would never need to subtract 1 from count/size/length, or add 1 for things like the month in JavaScript, because the first item would be at index 1, and many, many errors wouldn't happen because we don't need to force our minds to think another way. I learned counting from 1 right after I learned to walk, so it's the most natural thing to do. Just because the software/hardware below our language works that way doesn't mean we cannot abstract this behavior away. What's your opinion on this? Am I wrong?
-
I don't like it when clients decide which tech to use in the project. I got some weird tech requests like:
1. Move the existing database from PostgreSQL to Hadoop, because Hadoop is Big Data (that's kinda like moving from Amazon RDS to Amazon S3; just why? Have you even indexed and clustered your PostgreSQL tables?)
2. Move from MySQL to PostgreSQL because MySQL causes deadlocks (maybe their previous developer was just a fucking moron)
In these situations we just explain why we wouldn't use that and propose an alternative solution. If they insist on their solution, we either ignore it or decide not to continue the project.
-
Here's some research into a new LLM architecture I recently built and have had actual success with.
The idea is simple, you do the standard thing of generating random vectors for your dictionary of tokens, we'll call these numbers your 'weights'. Then, for whatever sentence you want to use as input, you generate a context embedding by looking up those tokens, and putting them into a list.
Next, you do the same for the output you want to map to, lets call it the decoder embedding.
You then loop, and generate a 'noise embedding', for each vector or individual token in the context embedding, you then subtract that token's noise value from that token's embedding value or specific weight.
You find the weight index in the weight dictionary (one entry per word or token in your token dictionary) thats closest to this embedding. You use a version of cuckoo hashing where similar values are stored near each other, and the canonical weight values are actually the key of each key:value pair in your token dictionary. When doing this you align all random numbered keys in the dictionary (a uniform sample from 0 to 1), and look at hamming distance between the context embedding+noise embedding (called the encoder embedding) versus the canonical keys, with each digit from left to right being penalized by some factor f (because numbers further left are larger magnitudes), and then penalize or reward based on the numeric closeness of any given individual digit of the encoder embedding at the same index of any given weight i.
You then substitute the canonical weight in place of this encoder embedding, look up that weight's index (in my earliest version), and then use that index to look up the word|token in the token dictionary and compare it to the word at the current index of the training output to match against.
Of course by switching to the hash version the lookup is significantly faster, but I digress.
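To make just that lookup step concrete, here's a toy sketch of the snap-to-nearest-weight operation (scalar weights, a linear scan standing in for the cuckoo hash, and every name and number invented for the example):

import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat"]
weights = {tok: rng.random() for tok in vocab}   # one random "weight" per token

context = np.array([weights[t] for t in ["the", "cat", "sat"]])
noise = rng.normal(0.0, 0.01, size=context.shape)   # noise embedding
encoder = context - noise                           # encoder embedding

# snap each noised value back to the nearest canonical weight, then to its token
keys = np.array([weights[tok] for tok in vocab])
decoded = [vocab[int(np.argmin(np.abs(keys - e)))] for e in encoder]
print(decoded)   # ['the', 'cat', 'sat'] -- recovered, given noise << the key gaps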
That introduces a problem.
If each input token matches one output token how do we get variable length outputs, how do we do n-to-m mappings of input and output?
One of the things I explored was using pseudo-markovian processes, where theres one node, A, with two links to itself, B, and C.
B is a transition matrix, and A holds its own state. At any given timestep, A may use either the default transition matrix (training data encoder embeddings) with B, or it may generate new ones, using C and a context window of A's prior states.
C can be used to modify A, or it can be used to as a noise embedding to modify B.
A can take on the state of both A and C or A and B. In fact we do both, and measure which is closest to the correct output during training.
What this *doesn't* do is give us variable length encodings or decodings.
So I thought a while and said, if we're using noise embeddings, why can't we use multiple?
And if we're doing multiple, what if we used a middle layer, let's call it the 'key', and took its mean over *many* training examples, and used it to map from the variance of an input (query) to the variance and mean of a training or inference output (value).
But how does that tell us when to stop or continue generating tokens for the output?
Posted on pastebin if you want to read the whole thing (DR wouldn't post for some reason).
In any case I wasn't sure if I was dreaming or if I was off in left field, so I went and built the damn thing, the autoencoder part, wasn't even sure I could, but I did, and it just works. I'm still scratching my head.
https://pastebin.com/xAHRhmfH
-
There is no such thing as a "Random Error".
Unless you are using rand() to select a random index from an array, and you forget to subtract 1 from the generated index.
Now that is one hell of a random error!
-
Had to do the FizzBuzz test in PHP. Proceeded to create a range(1, 100) before the for loop instead of using the loop's own index #. Worst part is I realized what I had done in the parking lot after I left. They asked me multiple times how I could optimize the code too, lol.
-
Up all damn night making the script work.
Wrote a non-sieve prime generator.
Thing kept outputting one or two numbers that weren't prime, related to something called Carmichael numbers.
In any case, got it to work; god damn was it a slog though.
Generates next and previous primes pretty reliably regardless of the size of the number
(haven't gone over 31 bit because I haven't had a chance to implement decimal for this).
Don't know if the sieve is the only reliable way to do it. This seems to do it without a hitch, and doesn't seem to use a lot of memory. Don't have to constantly return to a lookup table of small factors or their multiple either.
Technically it generates the primes out of the integers, and not the other way around.
It takes 0.01-0.02 seconds per prime up to around the 100 million mark, and then it gets into the 0.15-1 second range per generation.
At primes of around a couple billion, it's averaging about 1 second per bit to calculate 1. whether the number is prime or not, and 2. what the next or previous immediate prime is. Although I'm sure there's some optimization or improvement to be had here.
Seems reliable but obviously I don't have the resources to check it beyond the first 20k primes I confirmed.
From what I can see it didn't drop any primes, and it didn't include any errant non-primes.
Codes here:
https://pastebin.com/raw/57j3mHsN
Your gotos should be nextPrime(), lastPrime(), isPrime, genPrimes(up to but not including some N), and genNPrimes(), which generates x amount of primes for you.
Speed limit definitely seems to top out at 1 second per bit for a prime once the code is in the billions, but I don't know if thats the ceiling, again, because decimal needs implemented.
I think the core method, in calcY (terrible name, I know) could probably be optimized in some clever way if its given an adjacent prime, and what parameters were used. Theres probably some pattern I'm not seeing, but eh.
I'm also wondering if I can't use those fancy aberrations, 'Carmichael numbers' or whatever the hell they are, to calculate some sort of offset, and by doing so, figure out a given prime's index.
And all my brain says is "sleep"
But family wants me to hang out, and I have to go talk a manager at Home Depot into an interview, because wanting to program for a living and actually getting someone to give you the time of day are two different things.
-
Tldr:
Can't fucking figure out why I'm the only one who can't solve a DP problem in code, when my friends and I use the same idea and no one knows why only mine doesn't work...
We are given a task to solve a problem using DP. My friends write their code with the same idea as a solution. Copying the code is not allowed. I follow the same idea but my code won't work. Others look into it, in case they find errors. They can't find any.
The problem (for reference):
Given a fixed list of int's a = (a_1,a_2,...,a_n) and b = (b_1,b_2,...,b_n), a_i and b_i >= 0, a.length == b.length
We want to maximize the sum of the a_i's chosen. Every a_i is connected with the b_i at the same index. b_i tells us how many indexes of a we have to skip when choosing the corresponding a_i, so (index of b_i) + (value of b_i) + 1 would be the position of the next available a_i.
The idea:
Create a new list c with same length as a (or b).
Begin at the end of c and save a_n at the same position in c. Iterate backwards through c, and at each position store the current a_i plus the max over all previously saved values of c (respecting the b_i restriction), or a_i + 0 if the b_i restriction goes beyond the list.
Return the max value of c.
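For reference, here's a direct Python transcription of the idea exactly as described above; if this disagrees with your version somewhere, that diff is probably where the bug hides:

def max_sum(a, b):
    n = len(a)
    c = [0] * n
    for i in range(n - 1, -1, -1):              # iterate backwards through c
        nxt = i + b[i] + 1                      # first index allowed after taking a[i]
        c[i] = a[i] + max(c[nxt:], default=0)   # a_i + 0 if the restriction leaves the list
    return max(c)

print(max_sum([5, 1, 4], [1, 0, 0]))   # 9: take a[0], skip a[1], take a[2]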
How does that not work for me but for the others?? Funny enough, a few given samples work with my code. I'm questioning my coding ability...
-
How to describe the feeling when you start using SQL and have to get the first element from a table via JDBC...
You, obviously, think "oh, the first index is 0, every language starts at 0, so let's take the content at 0!!!" but the IDE returns you "0 < 1".
So you don't understand, you stare at the code for 20 mins, you start crying, and then you realize SQL starts counting from 1 because it pretends to be cool BUT IT DOESN'T.
I hate you, SQL.
-
This is probably a standard pattern/algorithm, but I feel pretty good about myself figuring this out.
I was doing a programming challenge and found myself with 2 lists of integer points (x,y). I needed to see where the points converged and identify those locations. Of course I started with a brute force approach and did nested loops to find these locations. This was taking WAY TOO LONG. These lists were 200K each. So checking with naive looping is 200K * 200K operations. Which is a lot.
Then I thought, well I am checking equality, so I will create a third map. The index to the map will be the point, and the data will be an integer. I then go through each list once incrementing the integer for each point that exists in each respective list. Any point with a value greater than 1 is a point convergence.
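A minimal sketch of that counting approach (the point values are invented):

from collections import Counter

list_a = [(0, 0), (1, 2), (3, 4)]
list_b = [(1, 2), (5, 6), (3, 4)]

counts = Counter()
for p in set(list_a):   # dedupe, so a point counts at most once per list
    counts[p] += 1
for p in set(list_b):
    counts[p] += 1

print([p for p, c in counts.items() if c > 1])   # (1, 2) and (3, 4): the convergences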
Like I said, this has got to be a standard thing, so can someone tell me what algorithm this is? I am not sure how to search for this.
I am fuzzy on complexity notation but I think the complexity started at n^2 and was reduced to n. Each list is cycled over once.
-
Index, currIndex and i are all -1.
The real index is a postfix on a string in another class passed through several layers of reflection and delegates.
Tomorrow will be better. Tomorrow will be better. Tomorrow will be better...
-
Math question time!
Okay so I had this idea and I'm looking for anyone who has a better grasp of math than me.
What if instead of searching for prime factors we searched for a number above p?
One with a certain special property. BEAR WITH ME. I know I make these posts a lot and I'm a bit of a shitposter, but I'm being genuine here.
Take this cherry picked number, 697 for example.
Its factors are 17 and 41. It's trivial, but just for demonstration.
If we represent its factors as a bit string, where each bit represents the index that factor occurs at in a list of primes, it looks like this
1000001000000
When converted back to an integer that number becomes 4160, which we will call f.
And if we do 4160/(2**n) until the result has a fractional component, then n in this case will be 7.
And 7 is the index of our lowest factor 17 (let's call it A; our highest factor we'll call B) in our primes list.
So the problem is changed from finding a factorization of p, to finding an algorithm that allows you to convert p into f. Once you have f it's a matter of converting it to binary, looking up the indexes of all bits set to 1, and finding the values of those indexes in the list of primes.
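The decoding direction is mechanical; a quick sketch against the 697 example:

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]

f = 4160   # 0b1000001000000, the factor bitmask for 697 = 17 * 41
factors = [primes[i] for i in range(f.bit_length()) if (f >> i) & 1]
print(factors)   # [17, 41] -- bit i set means the prime at index i is a factor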
I'm working on doing that and if anyone has any insights I'm all ears.
-
I feel a bit confused right now.
Did I misunderstand something in this random little exercise?
"Write a Java program to find the index of a value in a sorted array. If the value is not found, return the index where it would be if it were inserted in order. Example:
[1, 2, 4, 5, 6] 5(target) -> 3(index)
[1, 2, 4, 5, 6] 0(target) -> 0(index)
[1, 2, 4, 5, 6] 7(target) -> 5(index) "
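For comparison, the expected behavior is exactly the binary-search insertion point, which Python ships as a one-liner; it reproduces all three sample answers:

from bisect import bisect_left

print(bisect_left([1, 2, 4, 5, 6], 5))   # 3
print(bisect_left([1, 2, 4, 5, 6], 0))   # 0
print(bisect_left([1, 2, 4, 5, 6], 7))   # 5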
Here is what i did lol:
https://gist.github.com/laim2003/...
And here is the official solution:
https://gist.github.com/laim2003/...
Their solution seems a bit unnecessarily complicated, lol, or am I wrong?
-
So a couple of months ago I had some stability issues which seem to have caused Baloo to go crazy and create a 1.7 exabyte index file. It was apparently mostly empty, as ZFS compressed it down to 535MB.
Today I spent some time trying to reproduce the "issue", and it turns out that wasn't that hard.
So this little program, running on FreeBSD with a compressed (lz4) ZFS dataset, creates a 1.9 exabyte file, nicely compressed down to 45KB :)
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/limits.h>
int main(int argc, char** argv) {
    int fd = open("bigfile.lge", O_RDWR|O_CREAT, 0644);
    // seek ~2^31 bytes forward, a billion times: ~2.1 * 10^18 bytes in total
    for (int i = 0; i < 1000000000; i++) {
        lseek(fd, INT_MAX, SEEK_CUR);
    }
    // a single write at the far end materializes the (sparse) file
    write(fd, " ", 1);
    close(fd);
}
-
Here's the initial upgraded number fingerprinter I talked about in the past, plus some results and an explanation below.
Note that these are wide black images on ibb, so they appear as a tall thin strip near the top of ibb as if they're part of the website. They practically blend in. Right click the blackstrip and hit 'view image' and then zoom in.
https://ibb.co/26JmZXB
https://ibb.co/LpJpggq
https://ibb.co/Jt2Hsgt
https://ibb.co/hcxrFfV
https://ibb.co/BKZNzng
https://ibb.co/L6BtXZ4
https://ibb.co/yVHZNq4
https://ibb.co/tQXS8Hr
https://paste.ofcode.org/an4LcpkaKr...
Hastebin wouldn't save for some reason so paste.ofcode.org it is.
Not much to look at, but I was thinking I'd maybe mark the columns where gaps occur and do some statistical tests, like finding the stds of the gaps, density, etc. The type test I wrote categorizes products into 11 different types, based on the values of a subset of variables taken from a vector of a couple hundred variables, but I didn't want to include all that mess of code. And I was thinking of maybe running this fingerprinter on a per-type basis, set to repeat, and looking for matching indexes (pixels) to see what products have in common per type.
Or maybe using them to train a classifier of some sort.
Each fingerprint of a product shares something like 16-20% of indexes with its factors, so I'm thinking that's an avenue to explore.
What the fingerprinter does is better explained by the subfunction findAb.
The code contains a comment explaining this, but basically the function destructures a number into a series of divisions and subtractions, and makes a note of how many divisions are in a 'run'.
Typically this is for numbers divisible by 2.
So a number like 35 might look like this, when done
p = 35
((((p-1)/2)-1)/2/2/2/2)-1
And we'd represent that as
ab(w, x, y, z)
Where w is the starting value 35 in this case,
x is the number to divide by at each step, y is the adjustment (how much to subtract by when we encounter a number not divisible by x), and z is a string or vector of our results
which looks something like
ab(35, 2, 1, [1, 4])
Why [1, 4]?
because we were only able to divide by 2 once, before having to subtract 1, and repeat the process. And then we had a run of 4 divisions.
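Here's a guess at findAb's core loop, reconstructed from that walkthrough; the real code surely tracks more state, so treat this as a sketch:

def find_ab(p, x=2, y=1):
    runs, run = [], 0
    while p > 1:
        if p % x == 0:
            p //= x    # extend the current run of divisions
            run += 1
        else:
            if run:
                runs.append(run)   # close the run at each subtraction
                run = 0
            p -= y
    if run:
        runs.append(run)
    return runs

print(find_ab(35))   # [1, 4] -- matches the worked example above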
And for the fingerprinter, we do this for each prime under our number p, the list returned becoming another row in our fingerprint. And then that gets converted into an image.
And again, what I find interesting is that
unknown factors of products appear to share many of these same indexes.
What I might do is for, each individual run of Ab, I might have some sort of indicator for when *another* factor is present in the current factor list for each index. So I might ask, at the given step, is the current result (derived from p), divisible by 2 *and* say, 3? If so, mark it.
And then when I run this through the fingerprinter itself, all those pixels might get marked by a different color, say, make them blue, or vary their intensity based on the number of factors present, I don't know. Whatever helps the untrained eye to pick up on leads, clues, and patterns.
If it doesn't make sense, take another look at the example:
((((p-1)/2)-1)/2/2/2/2)-1
This is semi-unique to each product. After the fact, you can remove the variable itself, and keep just the structure in question, replacing the first variable with some other number, and you get to see what pops out the otherside.
If it helps, you can think of the structure surrounding our variable p as the 'electron shell', the '-1's as bandgaps, and the runs of '2's as orbitals, with the variable at the center acting as the 'nucleus', with the factors of that nucleus acting as the protons and neutrons, or nougaty center lol.
Anyway, I just wanted to share today's flavor of insanity on the off chance someone might enjoy reading it.
-
DailyCodingProblem: #1
Given an array of integers, return a new array such that each element at index i of the new array is the product of all the numbers in the original array except the one at i.
For example, if our input was [1, 2, 3, 4, 5], the expected output would be [120, 60, 40, 30, 24]. If our input was [3, 2, 1], the expected output would be [2, 3, 6].
this is my quick solution in PHP:
$input_array = [3, 2, 1]; // matches the output shown below
echo('INPUT ARRAY:');
print_r($input_array);
echo("<br/>");
$result = [];
foreach($input_array as $key => $value){
$works_input_array = $input_array;
unset($works_input_array[$key]);
$result[] = array_product($works_input_array);
}
echo('OUTPUT ARRAY:');
print_r($result);
output:
INPUT ARRAY:Array ( [0] => 3 [1] => 2 [2] => 1 )
OUTPUT ARRAY:Array ( [0] => 2 [1] => 3 [2] => 6 )
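For comparison, the classic O(n) prefix/suffix version of the same problem (no division), in Python:

def products_except_self(nums):
    out = [1] * len(nums)
    left = 1
    for i in range(len(nums)):               # prefix products from the left
        out[i] = left
        left *= nums[i]
    right = 1
    for i in range(len(nums) - 1, -1, -1):   # fold in suffix products from the right
        out[i] *= right
        right *= nums[i]
    return out

print(products_except_self([1, 2, 3, 4, 5]))   # [120, 60, 40, 30, 24]
print(products_except_self([3, 2, 1]))         # [2, 3, 6]
-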
nothing new, just another rant about php...
php, PHP, Php, whatever is written, wherever is piled, I hate this thing, in every stack.
stuff that works only according to how php itself is compiled, globals superglobals and turbo-globals everywhere, == is not transitive, comparisons are non-deterministic, ?: is freaking left associative, utility functions that return sometimes -1, sometimes null, sometimes are void, each with different style of usage and naming, lowercase/under_score/camelCase/PascalCase, numbers are 32bit on 32bit cpus and 64bit on 64bit cpus, a ton of silent failing stuff that doesn't warn you, references are actually aliases, nothing has a determined type except references, abuse of mega-global static vars and funcs, you can cast to int in a language where int doesn't even exist, 25236 ways to import/require/include for every different subcase, @ operator, :: parsed to T_PAAMAYIM_NEKUDOTAYIM for no reason in stack traces, you don't know who can throw stuff, fatal errors are sometimes catchable according to nobody knows what, closed-over vars are captured by value unless you use &, function calls that don't match the args signature don't fail, classes are not objects and you can refer to them only by string name, builtin underlying types cannot be wrapped, subclasses can't override parents' private methods, no overload for equality or ordering, -1 is a valid index for an array and doesn't fail, funcs are not data nor objects while closures instead are objects, there's no way to distinguish between a random string and a function 'reference', php.ini, documentation with comments and flame wars on the side, becomes case sensitive/insensitive according to the filesystem while line breaks instead are determined according to php.ini, it's freaking sloooooow...
enough. i'm tired of this crap.
it's almost weekend! 🍻 -
Does anyone know the reason behind why JavaScript Arrays start with an index of 0 instead of 1? Or why the .length property starts at 1 instead of 0.
-
String[] hidingPlaces = new String[1000000];
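// note: Math.random() returns a double in [0, 1), hence the cast below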
hidingPlaces[(int) (Math.random() * 1000000)] = "bug";
findBug(hidingPlaces);
public int findBug(String[] arr) {
    // todo: return index of bug with complexity < O(1)
} -
public function index(Request $request): array
{
    $parameters = json_decode($request->get('lazyEvent'), true);
    if (isset($parameters['page'])) {
        $page = $parameters['page'];
    } else {
        $page = 0;
    }
    $queryBuilder = DB::table('companies')
        ->leftJoin('company_contact', 'company_contact.company_id', '=', 'companies.id')
        ->groupBy('companies.id')
    ;
    if (isset($parameters['filters']['name'])) {
        $queryBuilder
            ->where('name', 'like', '%' . $parameters['filters']['name']['value'] . '%')
        ;
    }
    $total = $queryBuilder
        ->count()
    ;
    $companies = $queryBuilder
        ->select('companies.id', 'name', 'email', 'phone', DB::raw("COUNT('company_contact.id') AS contact_count"))
        ->orderBy('name')
        ->offset($page * $parameters['rows'])
        ->limit($parameters['rows'])
        ->get();
    return ['companies' => $companies, 'totalRecords' => $total];
}
what is this shit? I get $total = 1 when in reality the $companies count is 51.
I am thinking about writing the whole SQL as raw because I cannot get the fucking count right.
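In case anyone lands here with the same problem: as far as I can tell, count() on a grouped builder emits one COUNT per group and Laravel only reads the first row, hence the 1. A sketch of a workaround (same builder style as above; count distinct parents and skip the GROUP BY for the total):
$totalQuery = DB::table('companies')
    ->leftJoin('company_contact', 'company_contact.company_id', '=', 'companies.id');
if (isset($parameters['filters']['name'])) {
    $totalQuery->where('name', 'like', '%' . $parameters['filters']['name']['value'] . '%');
}
// should emit SELECT COUNT(DISTINCT companies.id) ... without the GROUP BY
$total = $totalQuery->distinct()->count('companies.id');
-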
Behold, the code submitted by user Hecker:
(Quote) > writing html like a pro:
from typing import List

class MissmatchedRowsAndCols(Exception):
    pass

class HtmlTableBuilder:
    classes: List[str] = []
    identifier: str = ""
    rows: List[str] = []
    cols: List[list] = []
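    # note: class-level mutable defaults, shared by every instance of the builder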
    def add_row(self, name: str):
        self.rows.append(name)
        return self

    def add_col(self, fields: list):
        if len(self.rows) != len(fields):
            raise MissmatchedRowsAndCols(
                "The given fields are not matched 1:1 with the rows.")
        self.cols.append(fields)
        return self
    def build(self, indent: int = 4) -> str:
        html = "<table border=\"2px\""
        if len(self.identifier) > 0:
            html += ' id="' + self.identifier + '"'
        if len(self.classes) > 0:
            html += ' class="' + (" ".join(self.classes)) + '"'
        html += ">\n"
        html += (" "*indent) + "<thead>\n"
        for row in self.rows:
            html += (" "*(indent*2)) + "<th>" + row + "</th>\n"
        html += (" "*indent) + "</thead>\n"
        html += (" "*indent) + "<tbody>\n"
        for col in self.cols:
            html += (" "*(indent*2)) + "<tr>\n"
            for field in col:
                html += (" "*(indent*3)) + "<td>\n"
                html += (" "*(indent*4)) + str(field) + "\n"
                html += (" "*(indent*3)) + "</td>\n"
            html += (" "*(indent*2)) + "</tr>\n"
        html += (" "*indent) + "</tbody>\n"
        html += "</table>"
        return html
builder = HtmlTableBuilder()
builder.add_row("index").add_row("language")
builder.add_col([0, "Python"]).add_col([1, "Kotlin"])
print(builder.build())
-
Algolia says:
"So our price widget doesn't allow decimals, you'll have to create a custom widget"
I do it.
"Hey, It's not working and I verified it's applying the filter correctly. I noticed my price is a string in your index, maybe that's incorrect and causing it to not work?"
They say: "Yep, you'll need to run an update to fix that and change all to floats" (charges an arm and a leg for the thousands of index operations needed to update the data type)
I clear the index and send a single one as a test, verifying it's a float by casting it using (float) then var_dumping. It shows "double(3.99)", but when it gets to Algolia, it's 0.
So I contact support.
"Hi, I'm sending across floats like you say but it's receiving it as 0, am I doing something wrong? Here's my code and the result of the var_dump"
They respond: "Looks like you're doing it right, but our log shows us receiving 3.999399593939, maybe check your PHP.ini for "serialize_precision" and make sure it's set to -1"
I check and it's fine; then I realize that var_dump is probably rounding to 2 decimal points, so I change my cast to (float) number_format($row['Price'], 2) and voilà... it works.
Now I've wasted days of paying for their service, a ton of charges for indexing operations, and it was such a simple fix.
If they had thrown an error for the infinite decimal, that would have helped, but instead I had to reach out to find out that was the issue.
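For posterity, the whole fix is one line ($record is a stand-in for whatever payload you send; the two extra number_format args pin the decimal point and drop the thousands separator, which would otherwise break the cast for prices over 999):
// round to two decimals before the cast so no 3.999399... noise reaches the index
$record['Price'] = (float) number_format($row['Price'], 2, '.', '');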
#Frustrated. -
Team lead gave me a task: fix a script he made to update 28k+ lines that wasn't working (he was busy with other stuff).
So I fixed it and tested it in our dev environment, which had about 10 rows to use as a test.
It worked well, but a single SELECT fetching 1 column from a table takes more than 40 secs, and I need this SELECT to run for every line (I tried fetching all the data at once but it was getting duplicated entries).
The damn table doesn't have an index; I think this will be the longest-running script I've ever made 😅😅😅 -
You have an array of objects with a startIndex and endIndex representing their position in a string, and you want to end up with a nested structure representing their relation in the string (e.g., A is from 1 to 10, B from 3 to 5, C from 4 to 5, D from 7 to 9). How would you do it?
Input array looks like: https://gist.github.com/AmyShackles...
Desired output looks like: https://gist.github.com/AmyShackles...
(Though what I really have to start with is an object whose keys are the start indexes and whose values are the elements of the above input array, so if you can think of a way to morph it without needing to turn it into an array first, that'd also be cool.)
Figured I've been stuck on this part of my side project for long enough that I really should just make a desperate cry for help. ❤️
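One approach that should work (a sketch, not battle-tested: it assumes the spans are properly nested, i.e. no partial overlaps, uses your startIndex/endIndex keys plus a children key I made up, and needs PHP 7.4+ for fn()): sort outer-first, then keep a stack of the spans that are still open:
function nest(array $items): array
{
    // outer spans first: sort by startIndex ascending, then endIndex descending
    usort($items, fn ($a, $b) =>
        [$a['startIndex'], -$a['endIndex']] <=> [$b['startIndex'], -$b['endIndex']]);
    $root = ['endIndex' => PHP_INT_MAX, 'children' => []];
    $stack = [&$root]; // references to the spans that are still open
    foreach ($items as $item) {
        $item['children'] = [];
        // close every span that ends before this one starts
        while ($stack[array_key_last($stack)]['endIndex'] < $item['startIndex']) {
            array_pop($stack);
        }
        $parent = &$stack[array_key_last($stack)];
        $parent['children'][] = $item;
        $stack[] = &$parent['children'][array_key_last($parent['children'])];
        unset($parent); // drop the alias before the next iteration
    }
    return $root['children']; // A(1-10) > [B(3-5) > [C(4-5)], D(7-9)]
}
As for the object keyed by start index: iterating it in key order feeds the same loop, so you may not need the intermediate array at all.
-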
Oh yeah ... Java is cool in an utterly sick way, to the point that I can't seem to find a non-retarded built-in stack data structure
Call me a racist, but java.util.Stack has a removeIf() method in case you want to remove odd numbers:
import java.util.Stack;

public class App {
    public static void main(String[] args) {
        int[] arr = { 2, 4, 7, 11, 13, 16, 19 };
        Stack<Integer> s = new Stack<Integer>();
        for (int i = 0; i < arr.length; i++) {
            s.push(arr[i]);
        }
        s.removeIf((n) -> (n % 2 == 1));
        System.out.println(s); // [2, 4, 16]
    }
}
Stop using java.util.Stack they said, it's a legacy class they said, use java.util.ArrayDeque instead. But frankly I can still keep up being racist (in a reversed manner):
import java.util.ArrayDeque;
import java.util.Deque;

public class App {
    public static void main(String[] args) {
        int[] arr = { 2, 4, 7, 11, 13, 16, 19 };
        Deque<Integer> s = new ArrayDeque<Integer>();
        for (int i = 0; i < arr.length; i++) {
            s.push(arr[i]);
        }
        s.removeIf((n) -> (n % 2 == 1));
        System.out.println(s); // [16, 4, 2]
    }
}
The fact that you can iterate through java.util.Stack is already amazing, but behold the ability to insert an element at a specified index:
import java.util.Stack;

public class App {
    public static void main(String[] args) {
        int[] arr = { 2, 4, 7, 11, 13, 16, 19 };
        Stack<Integer> s = new Stack<Integer>();
        for (int i = 0; i < arr.length; i++) {
            s.push(arr[i]);
        }
        s.add(2, 218);
        System.out.println(s); // [2, 4, 218, 7, 11, 13, 16, 19]
    }
}
That's what happens when you inherit java.util.Vector, which is only done by a BRAIN OVEN person, so brain oven that it reverts to retarded.
If you thought about using this type of bullshit in Java, get ready to beat the desk for hours when you accidentally call java.util.Stack<T>.add(int index, T element) instead of java.util.Stack<T>.push(T element); you will probably end up breaking the desk or your hand, but not solving the issue.
WHY THE F*** CAN'T WE HAVE A WORKING NORMAL STACK? -
Question for someone who uses Mongo Atlas Search:
If I'm only interested in autocomplete from the start of the text, which is more performant?
1) standard analyzer + edgeGram tokenizer
2) keyword analyzer + edgeGram tokenizer
I don't see why I should index separate words if I don't care about random positions :/
Thank you -
I want to keep 1 year of daily indexes, but have the ones older than 30 days unloaded from memory, yet still accessible when needed.
So like say there's a performance issue today and I want to compare all the activity against 2 months ago. I can open the old index and search it.
Can you do that? Does closing an index remove it from memory? Otherwise, how would you do it? -
Hang with me! This is *not* a math shitpost, I repeat, it is NOT a math shitpost, not entirely anyway.
It appears that for products of two non-trivial factors there is a real number n (well, a rational number anyway) such that p/n = i (some integer), whose factor chain is apparently no longer than floor(log(log(p))**2)-2, and whose largest factor never exceeds p^(1/4).
And this number is at least derivable, laboriously, with the following:
where p=a*b
https://pastebin.com/Z4thebha
And assuming you have the factors of p/z = jkl..
then instead of doing
p/(jkl..) = z
you can do
p-(jkl) to get the value of [result] whose index is a-1
Getting the actual factor tree of p/z is another matter, but it's a start.
Edit: you have to provide your own product.
Preferably import Decimal first. -
How to Jitter Click and Increase Clicks per Second?
If you are a gamer who wants to increase your clicks-per-second speed, you must learn how to jitter click. Here, I am sharing an easy step-by-step process of jitter clicking and how to master the technique with practice.
For those who are new to the concept of jitter clicking, let me first tell you about that.
What is Jitter Clicking?
Jitter Clicking is an advanced mouse-clicking technique that gives you more clicks per second on the CPS test ( https://cpstest.pro ) than the regular way of clicking. You use your forearm and wrist muscles to create vibrations in the hand and use them to make more clicks in less time.
How to Jitter Click? Step by Step Guide
If you want to learn jitter clicking, follow the steps provided below.
1. First, hold the mouse properly. A claw grip works the best for jitter clicking.
2. Start by making your forearm stiff and putting all the stress on the wrist muscle.
3. Use the stressed wrist to create vibration in your hand and the index finger.
4. The index finger must be exactly on top of the mouse button, keeping it just a few millimeters away.
5. The vibration in the finger will make the mouse button click way faster than normal.
That's it. You've successfully learned how to jitter click. It might seem a bit difficult in the beginning, but after you practice it enough, you'll be able to master jitter clicking within a week.
Among all my gamer friends who started using jitter clicking, most of them have seen significant improvement in their clicking speed. Those who had around 6-8 CPS earlier started to get 11-12 CPS within a week of jitter click practice. A few of them went even beyond that with 14 clicks per second.
According to stats, jitter clicking is recommended as the fastest way of clicking.
Clearly, it is a good technique but those who are starting to jitter click should take proper precautions as the method involves unusual muscle movements and may lead to wrist pain, cramps, or even carpal tunnel syndrome.
It is advised that gamers take sufficient breaks while jitter clicking and not perform it for long time periods in one go.
Keeping this in mind, I hope you'll definitely get better clicks per second using the jitter click technique.