16
RTRMS
14d

Finally, proof that both Avast and Atlassian are shit...

It’s taken how many decades to figure out that Atlassian is nothing more than a well-disguised virus.

Comments
  • 3
    I think that's Google vs Avast here 😂
  • 7
    @C0D4
    Atlassian is shit though, for real. Their software is a case study in the conservation of shit theory.
  • 6
    @SortOfTested they were once good, then they tried to fix it. When they redid the UX, they spent 6 months researching the best, most user-friendly ways to develop a web application, and then used that as a baseline of what NOT to do.
  • 1
    @RTRMS gotta stand out somehow. You need something to differentiate yourself from the other players. It's all marketing shit; we just don't understand it. Yet.
  • 3
    @SortOfTested I can't argue with that, but as a heavy user of Jira/Confluence, it's not the worst... but it certainly could be less "flashy" and more usable.

    It's not like it's upper management that's using it.
  • 3
    @RTRMS

    Your antivirus is scanning an HTTPS resource.

    Which usually means it intercepts HTTPS requests by installing e.g. a root certificate.

    Think about whether you want this or not.

    I'd deactivate that function ASAP...
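    What that interception looks like under the hood, as a minimal openssl sketch (the CA name here is made up, not any vendor's actual one): the AV generates its own root CA, installs it into the OS trust store, and re-signs everything it proxies. You can spot it by checking who issued the certificate your client actually receives:

```shell
# Sketch of what a TLS-intercepting AV does (names are illustrative):
# it generates a private root CA, installs it in the OS trust store,
# and re-signs every certificate it proxies with that CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Example AV Web Shield Root" \
  -keyout av-root.key -out av-root.crt 2>/dev/null

# The tell-tale sign: the issuer of the certificate your client receives
# is a local AV root like this one, instead of a public CA.
openssl x509 -in av-root.crt -noout -issuer
```

    If you run the same issuer check against a site through the AV's "web shield" and see the vendor's name instead of a public CA, the traffic is being intercepted.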
  • 3
    @IntrusionCM not my computer, work made us install it, I kept it on long enough for them to see it was on and removed it.
  • 2
    @RTRMS That’s what I did with work-mandated Uptycs. I even have a bash alias for killing it again, should I need to.
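    For reference, a hypothetical version of such a kill switch -- the process name "uptycs" is a guess, so check what the agent is actually called on your machine first:

```shell
# Hypothetical kill switch for a mandated monitoring agent.
# The process name "uptycs" is an assumption -- verify it first, e.g.:
#   ps aux | grep -i uptycs
kill_uptycs() {
  sudo pkill -f uptycs
}
```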
  • 2
    Let's do security by blasting a hole in encryption.

    *fuckity*
  • 1
    @IntrusionCM What is the security impact, actually? Confidentiality, sure. But then, a work PC is supposed to be used only for company stuff anyway.

    It's perfectly reasonable that company admins don't want their users to securely and confidentially download malware. That's why an HTTPS-breaking proxy is totally common.
  • 0
    @Fast-Nop

    Breaking it at the local client level is imho unacceptable.

    There are numerous issues, especially since you now have a viable attack vector on the PC to decipher any HTTPS traffic.

    And security holes in antivirus software aren't uncommon; rather, they're common.

    Usually I'd say a network-wide proxy is fine...

    But look at the TLS 1.3 dilemma.

    There are too many proxies that are non-conformant, insecure, or just plain wrong.

    The mass of patches that flowed in for "security appliances" to become secure and TLS 1.3 compatible left a very bad taste in my mouth.

    At least network-wide you don't have to update a gazillion clients when e.g. certificates need to be exchanged / replaced.
  • 0
    @IntrusionCM You have to update all clients anyway because for a https-breaking proxy, the certificate of that proxy is stored in all clients. Otherwise, the clients wouldn't accept the fake certificate returned by that proxy.

    And your original argument that breaking https makes things less secure applies to both local and proxy solutions.
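    That point can be shown with a minimal openssl sketch (all names are throwaway): a leaf certificate signed by the proxy's private root only verifies on clients that already trust that root, which is why it has to be rolled out to every machine before the proxy goes live.

```shell
# Throwaway "proxy root" CA, standing in for the corporate proxy's cert:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Corp Proxy Root" -keyout root.key -out root.crt 2>/dev/null

# Leaf certificate the proxy would present for an intercepted site:
openssl req -newkey rsa:2048 -nodes -subj "/CN=intercepted.example" \
  -keyout leaf.key -out leaf.csr 2>/dev/null
openssl x509 -req -in leaf.csr -CA root.crt -CAkey root.key \
  -set_serial 1 -days 1 -out leaf.crt 2>/dev/null

# Verifies only against a bundle that contains the proxy root; against
# the stock system bundle the same check fails -- hence the rollout.
openssl verify -CAfile root.crt leaf.crt
# prints: leaf.crt: OK
```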
  • 1
    @Fast-Nop I kept the comment short, I'm guilty of that... since explaining it correctly would take far too long.

    When you have a proxy, I'd expect the certificate to be installed in an easy, mass-updateable way -- certificates are usually small / easy to distribute.

    E.g. Microsoft group policies / Linux software packages. For me that's a requirement, not an optional thing.

    So yes, clients must be updated, but there is an automated, testable way to do so without breaking things at large.

    That's a whole different thing from a program that generates its own root certificate and deploys it locally.

    Especially since you only have to update the certificate, not a program. Program updates are usually a PITA, because you don't know what's going to happen.

    Updating antivirus or security software is usually painful, because you cannot test the whole program. And when something goes wrong... it requires a lot of manual intervention.

    And yes, having a proxy is less secure; that fact doesn't change.

    But when you update a proxy, it's a single component.

    Not multiple Windows / Linux / ... clients with different hardware / software etc.

    So it still sucks, but at least it sucks in a centralized way.

    More nitpicking? ;)
  • 0
    @IntrusionCM Ok, that clarifies things a lot, thx. ^^

    The only thing where I disagree is that I think an https-breaking proxy is a necessary evil in a corporate environment because there's only the choice between confidentiality and security.
  • 1
    @Fast-Nop I agree with that, too...

    It's just that I think a lot of these devices cannot be trusted, which is sad.

    It's a necessary evil... but most of the so-called security experts did a really awful job concerning the security of their devices.
  • 0
    Well, if everyone is hating on Atlassian, then show me better tools that work as well together as Jira, Confluence and Bitbucket do.
  • 0
    @Tr33 they don’t work great together, they all work like shit.

    We pay for Confluence, but nobody uses it because it’s so slow and unstable.

    We all hate using Jira, but again, the business bought into it. We don’t even track most of our work on it, as it spends more time hindering than helping.

    I moved my team off Bitfucket and all our projects over to GitHub, because it actually works.

    So sure, they integrate with each other fantastically, but what’s the point of 2-3 tools that integrate but simply don’t work?

    We moved all our documentation to docsify; we’ve added more docs to it in a week than in the year we’ve been using Confluence. Only the business people still actually use Confluence.
  • 0
    I have worked at three companies and on over 6 giant projects. All used different versions of Jira, Confluence and Bitbucket, but I never had the feeling that something wasn't working right. Everything was fast and smooth.

    To me it looks like everyone is hating because they didn't configure their projects correctly. Hating is easy; looking for the problem or a good solution is not.

    And even though everyone is hating, almost everyone is still using the software.
  • 0
    Breaking encryption on the client won't change much... if the client is infected, the virus can just scan the memory where the data is already decrypted...
  • 0
    @Tr33 I am complaining because it’s slow and the UX is rubbish. There was a time when it was great; about 5 years or so ago they did a big refactor and implemented React for all the UIs, and in the process made it a complex, unreliable mess.

    I often see stale data, connection issues and timeouts, and there are like 4 different menus that do different things with no logical connection to what you are trying to do.

    Adding a list to a Jira story is like winning the bloody lottery. 9 times out of 10 you just get blankness.

    Honestly, if they just rewound back to 2015 it would be 10x better than what it is now.
  • 0
    Hard to compare current snake oil products.
    But I would go for the least intrusive one which checks that box on the compliance checklist...
  • 0
    @Tr33

    I agree that Atlassian products "work together" reasonably well.

    But in my opinion, the great integration between their products does not compensate for the mediocre user experience of the products themselves.

    Both GitHub and GitLab also do task management, documentation and VCS in a single platform, and in my opinion they're both better in terms of simplicity, ecosystem and UX.

    Product choice depends on requirements, though. For example, solutions like Asana (project management) and Notion (simple kanban & docs) are very "messy and freeform", which works well in a flat organization with high levels of ownership and trust, but not in a highly hierarchical organization where exact access control and fine-grained management are required.
  • 0
    @Tr33

    Still, for most purposes it's all just fluff -- I've worked at a company where everything was done purely through a Git server (no web UI even), and it worked extremely well.

    Docs were written in markdown, within the code project, so they were version-controlled together with the code.

    Tasks were nothing more than git branches. Product owners created new tasks by pushing a branch in which the documentation had been updated to reflect the new feature requirements. As a dev, you would just do a git diff and read the task requirements from the markdown file.

    Annotated git tags were also used extensively to communicate task assignments, priorities, and branch statuses -- paired with a bunch of nicely aliased git commands to sort & grep through the tag annotations, things like "find me all branches where the latest tag has my email as assignee, and order them by priority".

    For people who aren't bothered by using CLI, their methodology was heavenly.
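    As a rough sketch of how that tag-based querying could look (branch names, tag names and the annotation format here are my own guesses, not whatever that company actually used):

```shell
# One branch per task; one annotated tag carries the task metadata.
git init -q demo && cd demo
git config user.name "Dev" && git config user.email "dev@example.com"
git commit -q --allow-empty -m "init"
git checkout -q -b task/login-form
git tag -a task/login-form/meta -m "assignee: dev@example.com
priority: 2
status: in-progress"

# "Find tasks whose annotation lists me as assignee" -- git tag -n<N>
# prints up to N lines of each tag's annotation, so plain grep does it:
git tag -n99 | grep "assignee: dev@example.com"
```

    The real setup would presumably wrap queries like that last line in git aliases, with extra sorting on the priority lines.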