I'm a senior dev at a small company that does some consulting. This past October, a really heavy personal situation came up and my job suffered for it. I raised the flag and was very open with my boss about it, and both he and my team of 3 understood and were pretty cool with me taking on a smaller load of work while I dealt with some stuff in my life. For a week.
Right after that, I got sent to a client. “One month only, we just want some presence there since it’s such a big client” alright, I guess I can do that. “You’ll be in charge of a team of a few people and help them technically.” Sounds good, I like leading!
So I get here. Let’s talk technical first: from being in a small but interesting project using Xamarin, I’m now looking at Visual Basic code, using Visual Studio 2010. Windows fucking Forms.
The project was made by a single dev for this huge company. She did what she could but as the requirements grew this thing became a behemoth of spaghetti code and User Controls. The other two guys working on the project have been here for a few months and they have very basic experience at the job anyways. The woman that worked on the project for 5 years is now leaving because she can’t take it anymore.
And that's not the worst of it. It took from October to December for me to get a machine. I literally spent two months reading on my cellphone and just going over my shitty personal situation for 8 hours a day. I complained to everyone I could and nothing really worked.
Then I got a PC! But wait... no domain user. Cue an extra month in which I could see the Windows 7 (yep) login screen and nothing else. Then, finally! A domain user! I can log in! Just wait 2 extra weeks for us to give your user access to the Subversion repo and you're good to go!
While all of this went on, I didn't get an access card until a week ago. Every day I had to walk to the reception desk, show my ID and request they call my boss so he could grant me access. 5 months of this, both at the start of the day and after lunch. There was one day in particular, between two holidays, when no one who could grant me access was at the office. I literally stood there until 11am, at which point I called my company and told them I was going home.
Now I've been actually working for a while, mostly fixing stuff that works like crap and trying to implement functions that should have been finished but aren't even started. Did I mention this app is in production and being used by the people here? Because it is. Imagine, if you will, the amount of problems an application connecting straight to the production DB can create when it doesn't even validate whether a field should receive numeric values only. Did I mention the DB itself is also a complete mess? Because it is. There's an "INDEXES" table in which, I shit you not, the IDs of every other table are stored. There are no identity fields anywhere; instead, every insert has to go to this INDEXES table, check the last ID of the table we're working on, then create a new record in order to give you your new ID. It's insane.
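For anyone who hasn't seen this pattern in the wild, here's a minimal sketch of what every single insert has to do (table and column names are invented for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE INDEXES (table_name TEXT PRIMARY KEY, last_id INTEGER);
    CREATE TABLE CUSTOMERS (id INTEGER, name TEXT);  -- no identity column
    INSERT INTO INDEXES VALUES ('CUSTOMERS', 0);
""")

def insert_customer(name):
    # every insert pays for a read-modify-write round trip on INDEXES
    last = conn.execute(
        "SELECT last_id FROM INDEXES WHERE table_name = 'CUSTOMERS'").fetchone()[0]
    new_id = last + 1
    conn.execute("UPDATE INDEXES SET last_id = ? WHERE table_name = 'CUSTOMERS'", (new_id,))
    conn.execute("INSERT INTO CUSTOMERS VALUES (?, ?)", (new_id, name))
    conn.commit()  # two writers racing between the SELECT and UPDATE = duplicate IDs
    return new_id

insert_customer("ACME")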
And, to boot, the new order from above is: We want to split this app in two. You guys will stick with the maintenance of half of it, some other dudes with the other. Still both targeting the same DB and using the same starting point, but each only working on the module that we want them to work in. PostmodernJerk, it’s your job now to prepare the app so that this can work. How? We dunno. Why? Fuck if we care. Kill you? You don’t deserve the swift release of death.
Also, I'm starting to get a bit tired of comments that go 'THIS DOESN'T WORK', 'I DON'T KNOW WHY WE DO THIS BUT IT HELPS', and my personal favorite: '??????????????????????'.
-
POSTMORTEM
"4096 bit ~ 96 hours is what he said.
IDK why, but when he took the challenge, he posted that it'd take 36 hours"
As @cbsa wrote, and nitwhiz wrote "but the statement was that op's i3 did it in 11 hours. So there must be a result already, which can be verified?"
I added time because I was in the middle of a port involving ArbFloat so I could get arbitrary precision. I had a crude desmos graph doing projections on what I'd already factored, in order to get an idea of how long it'd take to do larger bit lengths.
@p100sch speculated that I walked back the time and overstated the rig's capabilities. Instead, I spent a lot of time trying to get it 'just so'.
Worse, because I had to resort to "Decimal" in python (and am currently experimenting with the same in Julia), both of which are immutable types, the GC was taking > 25% of the cpu time.
Performance-wise, these are the numbers I cited in the actual thread, as of this time:
The largest product factored had 32-bit factors, 1855526741 * 2163967087, and took 1116.111s in Python.
The Julia build used a slightly different method, and managed to factor a product of 27-bit factors, 103147223 * 88789957, in 20.9s, but this wasn't typical.
What surprised me was the variability. One bit length could take 100s or a couple thousand seconds even, and a product that was 1-2 bits longer could return a result in under a minute, sometimes in seconds.
This started cropping up, ironically, right after I posted the thread. What's a man to do?
So I started trying a bunch of things, some of which worked. Shameless as I am, I accepted the challenge. Things weren't perfect, but it was going well enough. At that point I hadn't slept in ~30 hours, so when I thought I had it, I let it run and went to bed. 5 AM comes, I check the program. Still calculating, and way overshot. Fuuuuuuccc...
So here we are now, and it's safe to say the world's not gonna burn if I explain it, seeing as it doesn't work, or at least only works some of the time.
Other people, much smarter than me, mentioned it may be a means of finding more secure pairs, and maybe so; I'm not familiar enough to know.
For everyone that followed, commented, and contributed, even the doubters who kept a sanity check on this, without whom this would have been an even bigger embarrassment, and the people with their pins and tactical dots: thanks.
So here it is.
A few assumptions first.
Assuming p = the product,
a = some prime,
b = another prime,
and r = a/b (where a is smaller than b)
w = 1/sqrt(p)
(also experimented with w = 1/sqrt(p)*2, but I kept overshooting by a very small margin)
x = a/p
y = b/p
1. For every two numbers, there is a ratio (r) that you can search for among the decimals, starting at 1.0 and counting down. You can use this to find the original factors, e.g. p*r = n, p/n = m (assuming the product has only two factors), instead of having to do a sieve.
2. You don't need the first number you find to be the precise value of a factor (we're doing floating point math); a large subset of decimal values will naturally 'fall' into the value of a (or b) plus some fractional part, which is lost. Some of you will object, "But if that's wrong, your result will be wrong!", but hear me out.
3. You round for the first factor 'found', and from there, you take the result and do p/a to get b. If 'a' is actually a factor of p, then mod(b, 1) == 0, and then naturally, a*b SHOULD equal p.
If not, you throw out both numbers, rinse and repeat.
Now I knew this could be faster. I realized the finer the representation, the less important the fractional digits further right in the number were; it was just a matter of how much precision I could AFFORD to lose and still get an accurate result for r*p = a.
Fast forward through a lot of experimentation. I was hitting a lot of worst-case time complexities, where the most significant digits had a bunch of zeroes in front of them, so starting at 1.0 was a no-go in many situations. Then I started looking and realized I didn't NEED the ratio of a/b: I just needed the ratio of a to p.
Intuitively it made sense, but starting at 1.0 was blowing up the calculation time, and this made it so much worse.
I realized I could start at r = 1/sqrt(p) instead, because of certain properties: the fractional result, r, would ALWAYS be 1. close to one of the factors' fractional values of n/p, and 2. it looked like it was guaranteed that r = 1/sqrt(p) would ALWAYS be less than at least one of the primes, putting a bound on the worst case.
The final result, in executable pseudocode (Python, lol), looks something like the above variables plus:

import math

p = 103147223 * 88789957  # demo product from above
w = 1 / math.sqrt(p)      # starting ratio; b/p always sits above this
i = 1.0 / p               # step size (never specified in the original;
                          # this advances round(w*p) by ~1 per iteration)

while w <= 1.0:           # b/p < 1, so the search is bounded
    x = round(w * p)
    if p % x == 0:        # integer mod avoids float precision loss here
        y = p // x
        if x * y == p:
            print("factors found!")
            print(x)
            print(y)
            break
    w = w + i
Still working on it, but if anyone sees obvious problems I'd LOVE to hear about it.
-
I've been lurking on devrant a while now, I figure it's time to add my first rant.
Little background and setting a frame of reference for the rant: I'm currently a software engineer in the bioinformatics field. I have a computer science background whereas a vast majority of those around me, especially other devs, are people with little to no formal computer background - mostly biology in some form or another. Now, this said, a lot of the other devs are excellent developers, but some are as bad as you could imagine.
I started at a new company in April. About a month after joining a dev who worked there left, and I inherited the pipeline he maintained. Primarily 3 perl scripts (yes, perl, welcome to bioinformatics, especially when it comes to legacy code like is seen in this pipeline) that mostly copied and generated some files and reports in different places. No biggie, until I really dove in.
This dev, who I barely feel deserves to be called one, is a biology major turned computer developer. He was hired at this company and learned to program on the job. That being said, I give him a bit of a pass, as I'm sure he didn't have an adequate support structure to teach him any better. But still, some of this is BS.
One final note: not all of the code, especially a lot of the stupid logic, in this pipeline was developed by this other dev. A lot of it he simply inherited. However, he did nothing about it either, so I put the fault on him.
Now, let's start.
1. perl - yay bioinformatics
2. Redundant code. Like, you literally copied 200+ lines of code into a function to change 3 lines in that code for a different condition, and added if(condition) {function();} else {existing code;}?? Seriously??
3. Whitesmiths indentation style.. why? Just, why? Fuck off with that. Where did you learn that and why do you insist on using it??
4. Mixing of whitesmiths and more common K&R indentation.
5. Fucked indentation. Code either not indented at all or, in some places, indented THE WRONG WAY
6. 10+ indentation levels. Not "terrible" on its own, but imagine it combined with the last 3 points. You cannot follow the code at freaking all.
7. Stupid logic. Like, for example: check if a string has a comma in it. If it does, split the string on the comma and push everything to an array. If not, just push the string to the array... You, you know you can just split the string on the comma and push it, right?? If there is no comma, the result is an array containing the original string. Why the fuck did you think you needed a condition for that?? (See the sketch after this list.)
8. Functions that are called to set values in global variables, arrays, and hashes... each function has like 5 lines in it and is called in 2 locations. Just keep that code in place!
9. 50+ global variables/hashes/arrays in one of the scripts with no clear way to tell how/when values are set nor what they are used for.
10. Non-descriptive names for everything
11. Next to no comments in the code. What comments there are are barely useful.
12. No documentation
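To make point 7 concrete, here's the shape of that logic in Python (the pipeline is Perl, but split behaves the same way in both; the names are invented):

# what the pipeline actually did
def collect(field):
    values = []
    if "," in field:
        for part in field.split(","):
            values.append(part)
    else:
        values.append(field)
    return values

# what it needed: split already returns a one-element
# list when the separator isn't present
def collect_sane(field):
    return field.split(",")

assert collect("a,b") == collect_sane("a,b") == ["a", "b"]
assert collect("abc") == collect_sane("abc") == ["abc"]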
There's more, but this is all I can think to identify right now. All together these issues have made this pipeline the pinnacle of all the garbage that I've had to work on.
Attaching some screenshots of just a tiny fraction of the code to show some of the crap I'm talking about.
-
Dogecoin hit USD $0.40 recently, which means it's time for the Crypto Rant.
TL;DR: Dogecoin is shit and is logically guaranteed to eventually fall unless it is fundamentally changed.
===========================
If you know how Crypto works under the hood, you can skip to the next section. If you don't, here's the general xyz-coin formula:
Money is sent via transactions, which are validated by *anybody*.
Since transactions are validated by anybody, the system needs to make sure you're not fucking it up on purpose.
The current idea (that most coins use today) is called proof-of-work. In short, you're given an extremely difficult task, and the general idea is you wouldn't be willing to do that work if you were just going to fuck up the system.
For validating these transactions, you are rewarded twofold:
1) You are given a fixed-size prize of the currency from the system itself. This is how new currency is introduced, or "minted" if you prefer.
2) You are given a variable-size, user-determined prize called a "transaction fee", which could more accurately be called a "bribe", since its sole purpose is to entice miners to add YOUR transaction to their block.
This system of validation and reward is called mining.
===========================
This smaller section compares the design of BTC to Dogecoin, which will lead to my final argument.
In BTC, the time between blocks (chunks of data which record transactions and are added to the chain, hence blockchain) is ten minutes. Every ten minutes, BTC transactions are validated and new Bitcoins are born.
In Dogecoin, the time between blocks is only one minute. In theory, this means that mining Dogecoin is about ten times easier, because the system expects you to be able to solve the proof of work in an average of one minute.
The huge difference between BTC and Doge is the block reward (the fixed amount of newly minted coins). The block reward for BTC is somewhat complicated compared to Doge: it started at 50 BTC per block, and every 4 years it is halved ("the great halving"). Right now it's 6.25 BTC per block. Soon, the block reward will be almost nothing, until BTC hits its max of 21 million bitcoins "minted".
The Dogecoin reward is 10,000 coins per block. And it will be that way until the end of time: no maximum, no great halving. And remember, for every 1 BTC block mined, 10 Doge blocks are mined.
===========================
Bitcoin and Dogecoin are now the two most popular coins in pop culture. What makes me angry is the widespread misunderstanding of the differences between the two. It is likely that most investors buy Dogecoin thinking they're getting in "early" because it's so cheap. They think it's cheap because it isn't as popular as Bitcoin yet. They're wrong. It's cheap because of what's outlined in section two of this rant.
Dogecoin is actually not very far off Bitcoin. Do the math: there's a bit over 100 billion Dogecoin in circulation (130b). There's about 20 million BTC. Calculate their total CURRENT values:
130B * $0.40 = $52B
20M * $60K = $1.2T
...and Doge is rising much, much faster than BTC because of the aforementioned lack of understanding.
The most common thing I hear about Doge is that "nobody expects it to reach Bitcoin levels" (referring to being worth 60k a fucking coin). They don't realize that if Doge gets to be worth just $10 a coin, it will not just reach Bitcoin levels but overtake Bitcoin in value ($1.3T).
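A quick sanity check on those numbers (the supply figures are the approximations used above):

doge_supply = 130e9           # ~130 billion DOGE in circulation
btc_supply = 20e6             # ~20 million BTC

print(doge_supply * 0.40)     # 5.2e10  -> ~$52B DOGE market cap today
print(btc_supply * 60_000)    # 1.2e12  -> ~$1.2T BTC market cap
print(doge_supply * 10)       # 1.3e12  -> $10/DOGE already overtakes BTC

# and the supply never stops growing:
print(24 * 60 * 10_000)       # 14,400,000 new DOGE minted per day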
===========================
It's worth highlighting that Dogecoin is literally designed to fail. Since it lacks a cap on new coins being introduced, it's just simple math that no matter how much Doge rises, it will eventually be worthless. And it won't take centuries: remember that 100K new Doge are mined EVERY TEN MINUTES. 1,440 minutes in a day * 10K per minute is 14.4 million new coins per day. That's damn near every Bitcoin to ever exist, mined every day, in Dogecoin.
-
Today I finished my robotics project. I had a total idiot on my team (the one who used the hidden divs; some might remember him from another rant). I wanted to share with you the beginning of a ranting adventure.
Me: "you can begin with a simple task. I will send you the obstacle avoidance sensors values from Arduino, and you will send the data for the Arduino motors to dodge the obstacle".
The sensors give 1 if clear, 0 if obstacle is detected.
Below is his code (which I brutally rewrote in front of him).
Now, in the final version of the robot, we have something like 9 sensors of the same type to work with.
Imagine what would have happened if we had kept him coding. (You guessed it: 2^9 statements! :D)
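The problem with per-combination branching is that it scales as 2^n in the number of sensors, while a generic handler stays linear no matter how many you add. A rough Python sketch of both approaches (the sensor layout and motor commands are invented for illustration):

# sensors: list of 0/1 values, one per sensor; 1 = path clear
def dodge_exhaustive(sensors):
    # the 2^n way: 3 sensors already needs 8 branches
    if sensors == [1, 1, 1]: return "forward"
    if sensors == [0, 1, 1]: return "turn_right"
    if sensors == [1, 1, 0]: return "turn_left"
    # ...five more branches here, and 2**9 = 512 for the real robot

def dodge_generic(sensors, angles=(-60, 0, 60)):
    # the O(n) way: weight each blocked sensor by its angle from center
    blocked = [a for s, a in zip(sensors, angles) if s == 0]
    if not blocked:
        return "forward"
    # steer away from the average direction of the obstacles
    return "turn_left" if sum(blocked) / len(blocked) > 0 else "turn_right"

print(dodge_generic([0, 1, 1]))  # obstacle on the left -> turn_right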
I was not that evil; I tried to give him some chances to prove himself willing to improve. None of them were put to good use.
I'm so fucking glad we finished. I'm not gonna see him anymore, even if I'd like to be a technical interviewer for hiring just to demolish him.
I'm not always that evil, I promise (?)
PS: He didn't even have any idea what JSON is, even though we had already seen it during FIVE YEARS of computer engineering. (And he should've known anyway, if he had a bit of curiosity about the stuff he "studies".)
-
Two big moments today:
1. Holy hell, how did I ever get on without a proper debugger? I was debugging some old code by eye (following along and keeping track, mentally, of what the variables should be and what each step did). That didn't work because the code isn't intuitive. Tried the print() method, old reliable as it were. It kinda worked, but didn't give me enough fine-grained control.
Bit the bullet and installed Wing IDE for python. And bam, it hit me. How did I ever live without step-through, and breakpoints before now?
2. Remember that non-sieve prime generator I wrote a while back? (Well, maybe some of you do.) The one that generated quasi-Lucas-Carmichael (QLC) numbers? Well, that's what I managed to debug. I figured out why it wasn't working. Last time I released it, I included two core methods: genprimes() and nextPrime(). The first generates a list of primes accurately, up to some n, and only needs a small handful of QLC numbers filtered out after the fact (because the set of primes generated and the set of QLC numbers overlap. Well, I think they call it an embedding, as in QLC is included in the series generated by genprimes, but not the converse. But I digress).
nextPrime() was supposed to take any arbitrary n above zero and accurately return the nearest prime number above the argument. But when it started, it would return 2, 3, 5, 6... even though genprimes() worked fine for some reason.
So genprimes loops over an index, i, and tests it for primality. It begins by entering the loop, and doing "result = gffi(i)".
This calls into a function that runs four tests on the argument passed to it. I won't go into detail here about what those are, because I don't even remember how I came up with them (I'll make a separate post when the code is fully fixed).
If the number fails any of these tests then gffi would just return the value of i that was passed to it, unaltered. Otherwise, if it did pass all of them, it would return i+1.
And once back in genPrimes() we would check if the variable 'result' was greater than the loop index. And if it was, then it was either prime (comparatively plentiful) or a QLC number (comparatively rare)--these two types and no others.
nextPrime() only took n, and didn't have this index to compare against, so the prior steps in genprimes were acting as a filter that nextPrime() didn't have, while internally gffi() was returning not only primes and QLCs, but also plenty of composite numbers.
Now *why* that last step in genPrimes() was filtering out all the composites, idk.
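From the description, the structure is something like this (the real gffi() runs four tests I can't reproduce, so a plain trial-division check stands in for them here; the real one also lets QLC numbers through):

def gffi(i):
    # stand-in test: returns i+1 on a pass, i unchanged on a fail
    if i > 1 and all(i % d for d in range(2, int(i**0.5) + 1)):
        return i + 1
    return i

def genprimes(n):
    primes = []
    for i in range(2, n):
        result = gffi(i)
        if result > i:  # the comparison that filters out composites
            primes.append(i)
    return primes

print(genprimes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]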
But now that I understand what's going on, I can fix it, and hypothetically it should be possible to enter a positive n of any size and, without additional primality checks (such as is done with sieves, where you have to check off multiples of n), get the nearest prime number above it. Of course, I'm not familiar enough with prime number generation to know if that's an achievement or even worth mentioning, so if anyone *is* familiar with how something like that holds up compared to other linear generators (O(n)?), I'd be interested to hear about it.
I also am working on filtering out the intersection of the sets (QLC numbers), which I'm pretty sure I figured out how to incorporate into the prime generator itself.
I also think it may be possible to generate primes even faster using the Carmichael numbers or a related set, or even derive a function that maps one set of upper-and-lower bounds around a semiprime, and maps those same bounds to Carmichael numbers that act as the upper and lower bounds on the factors of the semiprime.
Meanwhile I'm also looking into testing the prime generator on a larger set of numbers (to make sure it doesn't fail at large values of n) and so I'm looking for more computing power if anyone has it on hand, or is willing to test it at sufficiently large bit lengths (512, 1024, etc).
Lastly, the earlier work I posted (linked below), I realized could be applied with ECM to greatly reduce the smallest factor of a large number.
If ECM, being one of the best methods available, only handles 50-60 digit numbers, and your factors are 70+ digits, then being able to transform your semiprime product into another product tree that's non-semiprime, with factors that ARE in range of ECM, and which *does* contain either of the original factors, means products that *were not* previously factorable by ECM *could* be now.
That wouldn't have been possible, though, without enormous help from many others, such as hitko, who took the time to explain that the solution was a form of modular exponentiation; Fast-Nop, who contributed on other threads; Voxera, who did as well; support from Scor in particular; and many others.
Thank you all. And more to come.
Links mentioned (because DR wouldn't accept them as they were):
https://pastebin.com/MWechZj9
-
My best code review experience?
Company hired a new department manager and one of his duties was to get familiar with the code base, so he started rounds of code reviews.
We had our own coding standards (naming, indentation, etc.) and, for the most part, all of our code would pass those standards 100%.
One review of my code was particularly brutal. I thought it was perfect: inline documentation, indentation, followed naming standards, everything. 'Tom' kept wanting to know the 'Why?'
Tom: 'This method where it validates the amount must be under 30. Why 30? Why is it hard-coded and not a parameter?'
<skip what it seemed like 50 more 'Why...?' questions>
Me: "I don't remember. I wrote that 2 years ago."
Tom: "I don't care if you wrote it yesterday. I have pages of code I want you to verify the values and answer 'Why?' to all of them. Look at this one..."
'Tom' was a bit of a hard-ass, but wow, did I learn A LOT. Coding standards are nice, but he explained that understanding the 'Why?' is what we are paid for. Coders can do the 'What' in their sleep. Good developers can read and understand code regardless of a coding standard, and mediocre developers use standards as a crutch (or worse, as a weapon against others). Great developers understand the 'Why?'.
Now I ask 'Why?' a lot. I've gotten my fair share of "I'm gonna punch you in the face" looks during code reviews, but being able to answer the 'Why?' solidifies the team around the goals of the project.
-
Next week I'm starting a new job and I kinda wanted to give you guys an insight into my dev career over the last four years. Hopefully it can give some people some insight into how a career can grow unexpectedly.
While I was finishing up my studies (AI), I decided to talk to one of these recruiters and see what kind of jobs I could get as soon as I was done. The recruiter immediately found me a job with a Java consultancy company that also had a training aspect on the side (four hours of training a week).
In this job I learned a lot about many things. I learned about Spring framework, clean code, cloud deployment, build pipelines, Microservices, message brokers and lots more.
As this was a consultancy company, I was placed at different companies. During my time here I worked on two different projects.
The first was a Microservices project about road traffic data. The company was a mess, and I learned a lot about company politics. I think I never saw anything I built really released in my 16 months there.
I also had to drive 200km every day for this job, which just killed me. And after far too long I was finally moved to the second company, which was much closer.
The second company was a fintech startup funded by a bank. Everything was so much better than the traffic company. There was a very structured release schedule, with a pretty okay scrum implementation. Every team had their own development environment on aws which worked amazingly. I had a lot of fun at this job, with many cool colleagues. And all the smart people around me taught me even more about everything related to working in software engineering.
I quit my job at the consultancy company, and with that at the fintech place, because I got an opportunity I couldn't refuse. My brother was working for Jordan Belfort, the Wolf of Wallstreet, and he said they needed a developer to build a learning platform. So I packed my bags and flew to LA.
The office was just a villa on the beach, next to Jordan's house. The company was quite small and there were actually no real developers. There was a guy who claimed to be the CTO of the company, but he actually only knew how to do WordPress and no one had named him CTO, which was very interesting.
So I sat down with Jordan and we talked about the platform he wanted to build. I explained how the things he wanted would eventually not be possible with WordPress, and that we needed to really start building software and become a software development company. He agreed, and I was set to designing a first iteration of the platform.
Before I knew it I was building the platform part by part, adding features everywhere, setting up analytics, setting up payment flows, monitoring, connecting to Salesforce, setting up build pipelines and setting up the whole aws environment. I had to do everything from frontend to the backest of backends. Luckily I could grow my team a tiny bit after a while, until there were four of us. But the other three were still very junior, so I also got the task of training them next to developing.
Still I learned a lot and there's so much more to tell about my time at this company, but let's move forward a bit.
Eventually I had to go back to the Netherlands because of reasons. I still worked a bit for them from over here, but the fun of it was gone without my colleagues around me, so I quit last September.
I noticed I was all burned out, had worked far too much, so I decided to take a few months off and figure out what I wanted to do with my life. I even wondered whether I wanted to stay in programming.
Fast forward to the last few weeks. I figured out I actually did want to keep working in software, but now I would focus on getting the right working conditions. No more driving 3 hours every day, no more working 12 hours every day. Just work close to home and find a company with the right values.
So I started sending out resumes and I gave one recruiter the chance to arrange some interviews too. I spoke to 7 companies in the span of one week. And they were all very interested. Eventually I narrowed it down to 2 companies and asked them for offers. And the company that actually had my preference offered me significantly more than I asked for, which settled the deal.
So tomorrow I'm officially signing with them, and starting next week I'll be developing in Kotlin, diving into functional programming and running our code in serverless environments. I'm very excited!
-
LXC, no doubt.
I mean to be fair, LXC is an amazing container runtime once you manage to set it up. But setting it up is the hard bit. Starting off with LXC 2.x, it was a nightmare to find out how to get things like the storage backends working. But with ZFS it ended up being alright. Find some arcane values to stick in the /etc/lxc/default.conf to use ZFS as the backend and then the default storage location on those ZFS pools (I'll get back to that later), and it worked alright. Again, once it works it's great, but setting it up and finding the right configuration keys is absolute hell.
So, LXC 2.x it was for a while, and a few months ago I finally ended up upgrading to 3.x. Every single configuration key changed. Every single one of them, and that's why I had to 1) learn LXC all over again, and 2) redeploy each and every one of my containers. That process is still not entirely completed.
The ZFS backend was once again a dive into arcane configuration keys found on forums and whatnot. Yeah... the official documentation has none of it. Oh, and in 3.x you now also have to dodge the torrent of "just use LXD m8" messages. Yeah, very helpful when LXD is also the ONLY way to reasonably configure it. Absolutely beautiful.
And as far as the ZFS default storage location goes (such as ssd/lxc/ct)? Yeah, forget about it. There's no configuration option for it anymore, and the default is "lxc". In ZFS lingo that means LXC has the audacity to demand a whole pool for itself. No. No, you don't deserve a whole pool for yourself. But hey, at least you can define the storage location to use in the lxc-create command! Every single time, you have to define it in lxc-create. I abstracted it away into my own LXC interface, so no big deal really. But yeah... that could absolutely be better. And in 2.x it was actually better.
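To give a flavor of the 3.x rename, roughly (from memory, so treat the exact keys as approximate; the full mapping is scattered across forum posts):

# LXC 2.x container config          ->  LXC 3.x equivalent
# lxc.network.type = veth           ->  lxc.net.0.type = veth
# lxc.network.link = lxcbr0         ->  lxc.net.0.link = lxcbr0
# lxc.rootfs = /var/lib/lxc/ct/...  ->  lxc.rootfs.path = /var/lib/lxc/ct/...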
Oh and btrfs, the filesystem I'd like to use on low memory systems because ZFS' ARC is too much on such systems? Yeah forget about it. I still have no idea how to do it. Thank you LXC and its amazing documentation!
And if you want the icing on the cake for LXC's terrible documentation, see their repo's index page at https://github.com/lxc/lxc/... Yeah, it's totally still at 2.x... That's how well they maintain it. Even Debian has 3.x now. And if you look at the branches, you'll find that even 4.x is already available and considered stable.
-
Any recommendations for introductory books on electrical engineering? I'm looking for something that goes into detail on the basics: voltage, current, resistance, inductance, capacitance, etc. I have very little knowledge of the subject (I know what the basic components do and that's it) and I found myself struggling a bit with the most basic concept: voltage.
I grabbed my multimeter, a few resistors and a battery and played around a bit. For some reason it doesn't really "click" why, on a 5V circuit with 3 2.2k ohm resistors (I think), the voltage across each resistor was ~1.3 volts or something, while on a circuit with 2 resistors the voltage across each one was ~2.3 something volts (I don't remember the exact values). Like, I know that voltage is a difference in potential, but I still don't get it and idk what I'm missing. Why is the difference in potential across a resistor different if the circuit has 2 resistors in series instead of 3? It kinda makes sense in my head, but at the same time it doesn't.
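For what it's worth, the specific puzzle is Ohm's law plus Kirchhoff's voltage law: the same current flows through every resistor in a series loop, and the drops have to sum to the supply voltage, so each of n equal resistors gets V/n. A quick check with nominal values (measured numbers come out lower once the battery sags under load):

V = 5.0     # supply voltage
R = 2200.0  # each resistor, in ohms

for n in (2, 3):         # two vs. three resistors in series
    I = V / (n * R)      # Ohm's law over the total series resistance
    print(n, "resistors:", round(I * R, 2), "V across each")
    # 2 resistors: 2.5 V each; 3 resistors: 1.67 V each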
In short, I want to know the "why" stuff works the way it does, not just the "how".
Also, if the book covers common practices, components, and circuits, that'd be very helpful. I want to learn how to build well-designed, reliable and safe circuits.
-
Math question time!
Okay so I had this idea and I'm looking for anyone who has a better grasp of math than me.
What if instead of searching for prime factors we searched for a number above p?
One with a certain special property. BEAR WITH ME. I know I make these posts a lot and I'm a bit of a shitposter, but I'm being genuine here.
Take this cherry-picked number, 697, for example.
Its factors are 17 and 41. It's trivial, but just for demonstration.
If we represented its factors as a bit string, where each bit represents the index that factor occurs at in a list of primes, it looks like this:
1000001000000
When converted back to an integer, that number becomes 4160, which we will call f.
And if we do 4160/(2**n), increasing n until the result comes back with a fractional component, then n in this case will be 7.
And 7 is the index of our lowest factor, 17 (let's call it A; the highest factor we'll call B), in our primes list.
So the problem is changed from finding a factorization of p to finding an algorithm that allows you to convert p into f. Once you have f, it's a matter of converting it to binary, looking up the indexes of all bits set to 1, and finding the values at those indexes in the list of primes.
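The back half of that (f to factors) is mechanical. A quick sketch of the 697 demo (using sympy's nth-prime function for brevity):

from sympy import prime  # prime(k) returns the k-th prime, 1-indexed

f = 4160  # 0b1000001000000
indexes = [k + 1 for k in range(f.bit_length()) if (f >> k) & 1]
factors = [prime(k) for k in indexes]
print(indexes, factors)  # [7, 13] [17, 41]
print(17 * 41)           # 697 -- so p -> f is the hard direction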
I'm working on doing that, and if anyone has any insights I'm all ears.
-
Top 12 C# Programming Tips & Tricks
Programming can be described as the process that leads a computing problem from its original formulation to an executable computer program. This process involves activities such as developing understanding, analysis, generating algorithms, verifying the essentials of those algorithms (including their accuracy and resource utilization), and coding the algorithms in the chosen programming language. The source code can be written in one or more programming languages. The purpose of programming is to find a series of instructions that can automate solving a specific problem or performing a particular task. Programming requires competence in various subjects, including formal logic, understanding of the application, and specialized algorithms.
1. Write Unit Test for Non-Public Methods
Many developers do not write unit tests for non-public methods, because those methods are invisible to the test project. C# lets you make an assembly's internals visible to other assemblies. The trick is to add the following to the AssemblyInfo.cs file:

// Make the internals visible to the test assembly
[assembly: InternalsVisibleTo("MyTestAssembly")]
2. Tuples
Many developers build a POCO class just to return multiple values from a method. Tuples, introduced in .NET Framework 4.0, can do this without the extra class.
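A minimal sketch of the idea (the method and values are invented):

// returning two values without a dedicated POCO class
public Tuple<int, string> GetUser()
{
    return Tuple.Create(42, "Alice");
}

// at the call site:
var user = GetUser();
Console.WriteLine(user.Item1);  // 42
Console.WriteLine(user.Item2);  // Alice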
3. Do not bother with Temporary Collections, Use Yield instead
Developers often create a temporary list to hold the items they pick out of a collection before returning them.
To avoid creating the temporary collection, use yield: results are handed out one at a time, as the result set is enumerated.
Developers also have the option of using LINQ.
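A sketch of the yield version (the method and filter are invented):

// streams matches as they're found; no temporary List<int> is built
public IEnumerable<int> GetEvens(IEnumerable<int> source)
{
    foreach (var n in source)
    {
        if (n % 2 == 0)
            yield return n;  // produced lazily, one per enumeration step
    }
}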
4. Making a retirement announcement
Developers who ship redistributable components and plan to deprecate a method in the near future can decorate it with the Obsolete attribute to warn its consumers:

[Obsolete("This method will be deprecated soon. You could use XYZ alternatively.")]

Upon compilation, a client gets a warning with that message. To fail a client build that still uses the deprecated method, pass the additional Boolean parameter as true:

[Obsolete("This method is deprecated. You could use XYZ alternatively.", true)]
5. Deferred Execution While Writing LINQ Queries
When a LINQ query is written in .NET, it only executes once the result is actually enumerated. This occurrence is known as deferred execution. Developers should understand that on every pass over the result set, the query executes all over again. To prevent the repeated execution, convert the LINQ result to a List after the first execution. Below is an example:
public void MyComponentLegacyMethod(List<int> masterCollection)
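// the article's example is cut off after the signature above;
// the body below is my reconstruction of its likely shape
{
    var bigOnes = masterCollection.Where(i => i > 100);  // deferred: not executed yet

    var count = bigOnes.Count();           // executes the query
    var first = bigOnes.FirstOrDefault();  // executes it again

    var cached = bigOnes.ToList();         // executes once; reuse cached freely
}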
6. Explicit keyword conversions for business entities
Use the explicit keyword to define a conversion from one business entity to another. The conversion operator is invoked wherever the cast is applied in code.
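A sketch of such an operator (the entity names are invented):

public class OrderDto
{
    public decimal Amount { get; set; }

    // invoked by: var order = (Order)someOrderDto;
    public static explicit operator Order(OrderDto dto)
    {
        return new Order { Amount = dto.Amount };
    }
}

public class Order
{
    public decimal Amount { get; set; }
}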
7. Absorbing the Exact Stack Trace
In a C# catch block, if the exception is rethrown with "throw e;" and the actual fault occurred in the method ConnectDatabase, the rethrown exception's stack trace will only indicate that the fault happened in the method RunDataOperation. Use a bare "throw;" to preserve the original stack trace.
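A sketch of the two forms (the method names are the ones from the article):

public void RunDataOperation()
{
    try
    {
        ConnectDatabase();  // the method that actually faults
    }
    catch (Exception e)
    {
        // throw e;  // BAD: the stack trace now starts at this line
        throw;       // GOOD: ConnectDatabase stays in the trace
    }
}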
8. Enum Flags Attribute
Decorating an enum with the Flags attribute in C# lets it behave as a bit field, which enables developers to combine enum values. One can use the following C# code.
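(The code block is missing from this copy; here's a reconstruction consistent with the stated output. The exact numeric values are my assumption, though they have to sum to 14:)

[Flags]
public enum Snakes
{
    BlackMamba = 2,
    CottonMouth = 4,
    Wiper = 8
}

var snakes = Snakes.BlackMamba | Snakes.CottonMouth | Snakes.Wiper;
Console.WriteLine(snakes);  // prints the combined names thanks to [Flags]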
The output for this code will be "BlackMamba, CottonMouth, Wiper". When the Flags attribute is removed, the output will just be 14.
9. Implementing the Base Type for a Generic Type
When developers want to constrain the type parameter of a generic class so that it must implement a particular interface, they can add a where constraint.
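A minimal sketch (the interface and class are invented):

public interface IEntity
{
    int Id { get; }
}

public class Repository<T> where T : IEntity  // T must implement IEntity
{
    public int GetId(T item) => item.Id;      // so Id is guaranteed to exist
}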
10. Using Property as IEnumerable doesn’t make it Read-only
When a class exposes its backing list through an IEnumerable property, the list can still be modified by a caller that casts the property back to List.
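(The article's code is missing here; a sketch of the problem it describes, with an invented class:)

public class Basket
{
    private readonly List<string> _items = new List<string> { "apple" };

    // looks read-only, but AsEnumerable returns the same list object
    public IEnumerable<string> Items => _items.AsEnumerable();
}

// a caller can cast it right back and mutate the internals:
var basket = new Basket();
((List<string>)basket.Items).Add("hacked");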
Code like that can modify the underlying list. To avoid this, expose the list with AsReadOnly as opposed to AsEnumerable.
11. Data Type Conversion
More often than not, developers have to convert between data types. For example, converting a decimal variable with a set value to an int.
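A small sketch of the two common routes (the values are invented); note they round differently:

decimal total = 128.75m;

int truncated = (int)total;            // 128: a cast truncates
int rounded = Convert.ToInt32(total);  // 129: Convert.ToInt32 rounds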
Source: https://freelancer.com/community/...
-
There are 1 type of programmers: those who understand how to use bits, and those who think it should be 10.