C0D465723: I think that's Google vs Avast here 😂
Let's do security by blasting a hole in encryption.
@IntrusionCM What is the security impact actually? Confidentiality, sure. But then, a work PC is supposed to be used only for company stuff anyway.
It's perfectly reasonable that company admins don't want their users to securely and confidentially download malware. That's why an https breaking proxy is totally common.
Breaking it at a local client level is imho unacceptable.
There are numerous issues, especially since you now have a viable attack vector on the PC to decipher any HTTPS traffic.
And security holes in antivirus software aren't uncommon; rather, they're common.
Usually I'd say a proxy / network wide is fine...
But look at the TLS 1.3 dilemma.
There are too many proxies that are non-conformant, insecure, or just plain wrong.
The mass of patches that flowed in for "security appliances" to be secure and TLS 1.3 compatible left a very bad taste in my mouth.
At least network-wide you don't have to update a gazillion clients when e.g. certificates need to be exchanged / replaced.
@IntrusionCM You have to update all clients anyway because for a https-breaking proxy, the certificate of that proxy is stored in all clients. Otherwise, the clients wouldn't accept the fake certificate returned by that proxy.
And your original argument that breaking https makes things less secure applies to both local and proxy solutions.
@Fast-Nop I kept the comment short, I'm guilty of that... since explaining it properly would take far too long.
When you have a proxy, I'd expect the certificate to be installed in an easy, mass updateable way - certificates are usually small / easy to distribute.
E.g. Microsoft group policies / Linux software packages. For me that's a requirement, not an optional thing.
So yes, clients must be updated, but there is an automated, testable way to do so without breaking stuff at large.
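To make the "automated, testable" part concrete: on Debian/Ubuntu, the canonical drop-in directory for an extra root CA is /usr/local/share/ca-certificates, followed by running update-ca-certificates. A minimal sketch of the testable piece, a helper that verifies a client's deployed CA file against the canonical one by SHA-256 fingerprint (file names here are hypothetical examples, not from the thread):

```shell
# Sketch, assuming the proxy's root CA is distributed as a plain PEM file.
# check_ca <canonical-cert> <deployed-cert>
# prints "up-to-date" when the client's copy matches, "stale" otherwise.
check_ca() {
  canonical_hash=$(sha256sum "$1" | cut -d' ' -f1)
  deployed_hash=$(sha256sum "$2" | cut -d' ' -f1)
  if [ "$canonical_hash" = "$deployed_hash" ]; then
    echo "up-to-date"
  else
    echo "stale"
  fi
}
```

A config-management tool would run such a check across the fleet after pushing the new certificate, which is exactly the kind of verification a self-deploying local program doesn't give you.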
That's a whole different thing from a program that generates its own root certificate and deploys it locally.
Especially since you only have to update the certificate, not a program. Program updates are usually a PITA, because you don't know what's going to happen.
Updating Antivirus or Security software is usually painful, because you cannot test the whole program. And when something goes wrong... It requires a lot of manual intervention.
And yes, having a proxy is less secure, that fact doesn't change.
But when you update a proxy, it's a single component.
Not multiple Windows / Linux / ... clients with different hardware / software etc.
So it still sucks, but at least it sucks in a centralized way.
More nitpicking? ;)
Tr33: Well, if everyone is hating on Atlassian, then show me better tools that work as well together as Jira, Confluence and Bitbucket do.
@Tr33 they don’t work great together, they all work shit.
We pay for confluence but nobody uses it because it’s so slow and unstable.
We all hate using Jira, but again, business bought into it. We don't even track most of our work on it, as it spends more time hindering than helping.
I moved my team off bitfucket and all our projects over to GitHub because it actually works.
So sure, they integrate with each other fantastically, but what's the point in 2-3 tools that integrate but simply don't work?
We moved all our documentation to docsify; we've added more docs to it in a week than in the year we'd been using Confluence. Only the business people are still actually using Confluence.
Tr33: I have worked at three companies and on over 6 giant projects. All used different versions of Jira, Confluence and Bitbucket. But I never had the feeling that something wasn't working right. Everything was fast and smooth.
To me it looks like everyone is hating because they didn't configure their projects correctly. Hating is easy; looking for the problem or a good solution is not.
And even though everyone is hating, almost everyone is still using the software.
Gregozor2121: Breaking encryption on the client won't change much... if the client is infected, the virus can just scan the memory where the data is already decrypted...
RTRMS: @Tr33 I am complaining because it's slow and the UX is rubbish. There was a time when it was great; about 5 years or so ago they did a big refactor and implemented React for all the UIs, and in the process made it a complex, unreliable mess.
I often see stale data, connection issues and timeouts, there are like 4 different menus that do different things with no logical connection to what you are trying to do.
Adding a list to a Jira story is like winning the bloody lottery. 9 times out of 10 you just get blankness.
Honestly if they just rewound back to 2015 it would be 10x better than what it is now.
Oktokolo: Hard to compare current snake oil products.
But I would go for the least intrusive one which checks that box on the compliance checklist...
I agree that Atlassian products "work together" reasonably well.
But in my opinion, the great integration between their products does not compensate for the mediocre user experience of the products themselves.
Both Github and Gitlab also do task management, documentation and VCS in a single platform, and in my opinion they're both better in terms of simplicity, ecosystem and UX.
Product choice depends on requirements though. For example, solutions like Asana (project management) and Notion (simple kanban & docs) are very "messy and freeform", which works well in a flat organization with high levels of ownership and trust, but not in a highly hierarchical organization where exact access control methods and fine grained management are required.
Still, for most purposes it's all just fluff -- I've worked at a company where everything was done purely through a Git server (no web UI even), and it worked extremely well.
Docs were written in markdown, within the code project, so it is version controlled together with the code.
Tasks were nothing more than git branches. New branches were created by product owners, who would push such a new branch in which the documentation was updated to reflect the new feature requirements. As a dev, you would just do a git diff and read the task requirements from the markdown file.
Branches also used annotated git tags extensively to communicate task assignments, priorities, and branch statuses -- paired with a bunch of nicely aliased git commands to sort & grep through tag annotations, things like "find me all branches where the latest tag has my email as assignee, and order them by priority"
For people who aren't bothered by using CLI, their methodology was heavenly.
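The tag-annotation queries described above could be sketched roughly like this. The assignee=/priority= subject-line convention and the task/ tag prefix are my own illustrative assumptions, not the original team's actual scheme:

```shell
# Sketch: list tags whose annotation assigns a task to a given email,
# ordered by priority. Assumes (hypothetically) each task branch gets an
# annotated tag whose subject line looks like:
#   "assignee=alice@example.com priority=2 status=open"
my_tasks() {
  me="$1"
  git for-each-ref refs/tags \
      --format='%(refname:short) %(contents:subject)' |
    grep "assignee=$me" |            # keep only my tasks
    sed 's/.*priority=\([0-9]*\).*/\1 &/' |  # prefix priority as sort key
    sort -n |                        # lowest priority number first
    cut -d' ' -f2                    # keep just the tag name
}
```

Wrapped in a git alias, a call like my_tasks alice@example.com is essentially the "find all branches assigned to me, ordered by priority" query from the comment above.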