Search - "human interface"
-
Welcome back to practiseSafeHex's new life as a manager.
Episode 2: Why automate when you can spend all day doing it by hand
This is a particularly special episode for me, as these problems are taking up so much of my time with nonsensical bullshit that I'm delayed with everything else. Some badly require tooling or new products. Some are just unnecessary processes or annoyances that should not need to be handled by another human. So let's jump right in, in no particular order:
- Jira ... nuff said? Not quite, because some blue-moon, planets-aligning, act-of-God style set of circumstances lined up to allow this team to somehow make Jira worse. On one hand we have a gigantic Jira project containing 7 separate sub-teams, a million different labels / epics and 4.2 million possible assignees, all making sure the loading page takes as long as possible to open. But the new country we've added support for in the app gets a separate project. So we have product, backend, mobile, design, management etc. on one, and mobile-country2 on the other. This delightfully means a lot of duplication and copy-pasting from one to the other, for literally no reason whatsoever.
- Everything on Jira is found through a label. Every time something happens, a new one is created. So I need to check for "iOS", "Android", "iOS-country2", "Android-country2", "mobile-<feature>", "mobile-<feature>-issues", "mobile-<feature>-prod-issues", "mobile-<feature>-existing-issues" and "<project>-July31" ... why July31? Because some fucking moron decided to do a round of testing and tag all the issues with the current date (despite the fact that Jira records that anyway), which somehow still gets used from time to time because nobody pays attention to what they are doing. This means creating and modifying filters on a daily basis ... after spending time trying to figure out why something isn't in the first one.
- One of my favourite morning rituals is one I like to call "Jira dumpster diving". This involves me removing all the filters and reading all the tickets. Why would I do such a thing? Oh, remember the 9000 labels I mentioned earlier? Right, well it's very likely that people actually won't use any of them ... or will use the wrong ones ... or assign tickets to the wrong person, so I have to go find them and fix them. If I don't, I'll get yelled at, because clearly it's my fault.
- Moving on from Jira. As some of you might have seen in your own companies, if you use things like TestFlight, HockeyApp, AppCenter, BuddyBuild etc., when you release a new app version for testing, each version comes with an automated changelog listing the ticket numbers addressed ... yeah, we don't do that. No, we use this shitty service, which is effectively an FTP server and a webpage, and only lets you host the new versions. Sending out those emails is all manual ... distribution groups?? ... what's that?
- Moving back to Jira. I can't even automate the changelog with a script, because I can't make enough sense of the tickets to translate them into one (there's a rough sketch of what such a script could look like near the end of this rant).
- Moving on from Jira. Me and one of the remote testers play this great game I like to call "tag team ticketing". It's so much fun. Right, here's how to play: you'll need a QA and a PM.
* QA creates a ticket, puts nothing of any use inside it, and assigns it to the PM.
* PM fires it back asking for clarification.
* QA adds in what he feels is clarification (he's wrong) and assigns it back to the PM.
* PM sends detailed instructions, with examples of what is needed, and assigns it back.
* QA adds 1 of the 3 things required and assigns it back.
* PM assigns it back saying the one thing added is from the wrong day, and reminds him about the other 2 items.
* QA adds some random piece of unrelated info to the ticket instead, forgetting about the 3 things, and assigns it back.
And you just continue doing this for the whole dev / release cycle, hahaha. Oh, you guys have no idea how much fun it is. Seriously, give it a go, you'll thank me later ... or kill yourselves, each to their own.
- Moving back to Jira. I decided to take the initiative of creating a new project for my team (the mobile team), setting it up the way we want, and just ignoring everything going on around us. Use proper automation, and a kanban board. Maybe only give product a Slack bot interface that won't allow them to create a ticket without the info we need, etc. I spent 25 minutes looking for the "create new project" button before finding the link which says I need to open a ticket with support and wait ... 5 ... fucking ... long ... painful ... unnecessary ... business days.
... Here's hoping my head continues to not have a bullet hole in it by then.
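For the record, if the labels weren't a moving target, the changelog script I mentioned above wouldn't even be hard. Here's a rough sketch of the idea, nothing like our actual setup: the Jira URL, credentials, project key and label names are all made up, and it just hits Jira's standard REST search endpoint with the kind of JQL filter I keep rebuilding by hand:

```python
import requests  # assumes the 'requests' package is available

# All placeholders — swap in your own instance, credentials and labels.
JIRA_URL = "https://jira.example.com"
AUTH = ("bot-user", "api-token")
JQL = (
    'project = MOBILE AND fixVersion = "2.3.0" '
    'AND labels in ("iOS", "iOS-country2", "mobile-checkout", "mobile-checkout-issues") '
    "ORDER BY key"
)

def fetch_issues(jql: str):
    """Page through Jira's REST search endpoint and yield (key, summary) pairs."""
    start = 0
    while True:
        resp = requests.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={"jql": jql, "startAt": start, "maxResults": 50, "fields": "summary"},
            auth=AUTH,
        )
        resp.raise_for_status()
        data = resp.json()
        for issue in data["issues"]:
            yield issue["key"], issue["fields"]["summary"]
        start += len(data["issues"])
        if start >= data["total"] or not data["issues"]:
            break

if __name__ == "__main__":
    print("Changelog for 2.3.0:")
    for key, summary in fetch_issues(JQL):
        print(f"- {key}: {summary}")
```

The querying is the easy part; the problem is that none of those labels get applied consistently enough for the output to mean anything.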
I'd love to talk more, but those filters ain't gonna fix themselves. So we'll have to leave it here for today. Tune in again for another episode soon.
And remember to always practiseSafeHex
-
Every hour or so someone shows up at my desk requesting a query to be executed.
I feel like I'm just a human-SQL interface.
-
smh.
Why hire a Native iOS Developer when you're going to let him develop an iOS app to look EXACTLY like its Android counterpart?
There's hybrid and Cordova for that if you want your apps to look and feel exactly the same.
Can't we all just agree to appreciate the distinct differences in UI between the two platforms? Navigation especially? Copy context, not UI.
-
So for those of you keeping track, I've become a bit of a data munger of late, something that is both interesting and somewhat frustrating.
I work with a variety of enterprise data sources. Those of you who have done enterprise work will know what I mean. Forget lovely Web APIs with proper authentication and JSON fed by well-known open source libraries. No, I've got the output from an AS/400 to deal with (for the youngsters amongst you, AS/400 is a 1980s IBM mainframe-ish operating system that originally ran on 48-bit computers). I've got EDIFACT to deal with (for the youngsters amongst you: EDIFACT is the 1980s precursor to XML. It's all cryptic codes, + delimited fields and ' delimited lines) and I've got legacy databases to massage into newer formats, all for what is laughably called my "data warehouse".
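To give a flavour of what munging that stuff looks like, here's a toy sketch of the kind of EDIFACT splitting involved. It only handles the simple case described above (segments ending in ' and fields split on +), ignores release characters, component separators and UNA headers, and the sample message is made up.

```python
# Minimal EDIFACT segment/field splitter (naive: no release-character or
# component-separator handling, no UNA service string advice).
def parse_edifact(raw: str):
    """Split an EDIFACT interchange into segments, and each segment into fields."""
    segments = []
    for chunk in raw.split("'"):          # ' terminates a segment
        chunk = chunk.strip()
        if not chunk:
            continue
        fields = chunk.split("+")         # + separates data elements
        segments.append({"tag": fields[0], "fields": fields[1:]})
    return segments

if __name__ == "__main__":
    # Made-up fragment, roughly the shape of a real interchange header
    sample = "UNB+UNOA:2+SENDER+RECEIVER+230101:1200+1'UNH+1+ORDERS:D:96A:UN'"
    for seg in parse_edifact(sample):
        print(seg["tag"], seg["fields"])
```

Real interchanges are messier, but that's the general shape of it.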
But of course, the one system that actually gives me serious problems is the most modern one. It's web-based, on internal servers. It's got all the late-noughties buzzwords in web development, such as AJAX and jQuery. And it now has a "Web Service" interface, added at the request of the bosses, that I have to use.
The programmers of this system have based it on that very well-known database: InterSystems Caché. This is an object database, and it doesn't have an SQL driver by default, so I'm basically required to use this "Web Service".
Let's put aside the poor security. I basically pass a hard-coded, human-readable string as a password in a password field in the GET parameters. This is a step up from no security, to be fair, though not much.
It's the fact that the thing lies. All the files it spits out start with that fateful string: '<?xml version="1.0" encoding="ISO-8859-1"?>' and it lies.
It's all UTF-8, which has made some of my parsers choke when they're expecting Latin-1.
But no, the real lie is the fact that IT IS NOT WELL-FORMED XML. Let alone valid.
THERE IS NO ROOT ELEMENT!
So now, I have to waste my time writing a proxy for this "web service" that rewrites the XML encoding declaration on these files and adds a root element, just so I can spit it at an XML parser. This means added infrastructure for my data munging, and more potential bugs and points of failure.
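The whole fix-up shim boils down to a few lines. Here's a rough sketch of the idea, not the actual proxy: the URL, the credentials and the <DataDump> wrapper element are all made up, and it assumes the payload really is UTF-8 behind a bogus ISO-8859-1 declaration with no root element.

```python
import re
import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

# All of these are placeholders, not the real service.
SERVICE_URL = "http://internal-server/webservice"
PARAMS = {"user": "integration", "password": "hunter2"}  # yes, a password in the GET string...

def fetch_fixed_xml(url: str = SERVICE_URL) -> ET.Element:
    """Fetch the 'XML', fix the lying encoding declaration, add a root element, parse."""
    full_url = url + "?" + urllib.parse.urlencode(PARAMS)
    raw = urllib.request.urlopen(full_url).read()

    # The declaration claims ISO-8859-1; the bytes are actually UTF-8.
    text = raw.decode("utf-8")

    # Throw away the bogus declaration...
    text = re.sub(r"<\?xml[^>]*\?>", "", text, count=1)

    # ...and wrap everything in an invented root element so it's well-formed.
    wrapped = '<?xml version="1.0" encoding="UTF-8"?>\n<DataDump>' + text + "</DataDump>"

    # ElementTree wants bytes when an encoding declaration is present.
    return ET.fromstring(wrapped.encode("utf-8"))

if __name__ == "__main__":
    root = fetch_fixed_xml()
    print(root.tag, len(list(root)))
```

It works, but it's exactly the kind of extra moving part I was complaining about.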
Let's just say that the developers of this system don't really cope with people wanting to integrate with them. It's amazing that they manage to integrate with third parties at all...
-
From a somewhat heated discussion I want to extract this: one big pain in the ass is the human-to-computer interface. Maybe it's the natural vs. formal language divide, but there's a mismatch here deeper than the one between object and relational models, and no ORM-like fix will do anything but fail.
The whole discussion turned on such a point: some wanted an interface that was more human-friendly, and I stubbornly insisted on the way that stays simple for the computer system. Like, not too much human messiness should invade the machine. One argument sounded as if human words were like Unicode code points, whose meaning doesn't depend on their representation.
That raises red flags for me: no, no, no, natural language is too messy, keep it out. This poor machine could have been so clean and well designed, and we've already stacked up so much entropy in what we still dare to call an OS...
Dunno, what's your stance? Still hoping that your shell will one day be able to process our poor standard English? Or do you think, like me, that all those failed attempts show there's a gap you should not even touch?
-
So today my teacher told me to do that project for some competition or something (frankly, I don't remember clearly what it's for). He gave us the machines we need and the CDs with the systems we have to work with. We are supposed to make a properly working Beowulf cluster from the things we've been given.
Well, no.
Fucking no.
I am really okay with making this the way my teacher wants us to. I am okay with installing an Ubuntu 16.04 server that is completely irrelevant to the project, because it's not part of the cluster. I am really okay with using some weird Linux distribution on the master that nobody has ever heard of. But I'm not okay when the software we've been given (including the operating system) has seven pages of documentation, especially when fucking screenshots of what PXE booting should look like are roughly 70% of it. No, I couldn't find a thing on the internet about it. I couldn't read the fucking manual. There was no fucking manual. There was no fucking --help. There was no motherfucking English. Everything was motherfucking Spanish, including that 7-page document that was supposed to guide us through our work. The work was planned to be done by March. The only reason I can think of why doing the stuff the document tells us to do would take four motherfucking months is that we'd have to learn Spanish to do it. And I'm not going to do that. Not because I don't like Spanish or learning. Simply because I didn't sign up for this to learn languages.
And no, I can't switch to other, human-friendly software. I am only allowed to use the things the teacher has given us, because somebody already worked on this a couple of years ago and left a PDF file on how to install that Ubuntu server I was writing about a moment ago. Which, by the way, was an "installation guide for animals": showing how to install a system, screenshot after screenshot.
It took about an hour to figure out that the thing supposed to handle PXE-booting computers was telling us the whole time that it couldn't work because we had to configure the ethernet interface manually. Because why the fuck not.
-
I like iOS's Human Interface design much more than Android's Material Design, though I am an Android user. Material Design has some unimportant components, IMO. I feel that the default action bar title is too big. Also, that floating action button is weird sometimes; it hides important portions of the content in poorly designed apps.
-
Can an app built with Cordova, designed without following Apple's Human Interface Guidelines (so basically a responsive website), be published on the App Store?
-
It's a form of artistic expression for some people (like me) who aren't as great with paper and pen but still have ideas and patterns and concepts and abstractions to express.
Watching the data just flow through the pipelines and pathways you've laid down for it, creating spectacles from what is essentially electricity running through a rock. Being able to create an interface between a human mind and an inanimate, dead block of dug-up and processed ore feels like tapping into the metaphysical.
(Yeah I'm pretentious with words)
-
The Turing Test, a concept introduced by Alan Turing in 1950, has been a foundational concept for evaluating a machine's ability to exhibit human-like intelligence. But as we edge closer to the singularity—the point where artificial intelligence surpasses human intelligence—a new, perhaps unsettling question comes to the fore: Are we humans ready for the Turing Test's inverse? Unlike Turing's original proposition where machines strive to become indistinguishable from humans, the Inverse Turing Test ponders whether the complex, multi-dimensional realities generated by AI can be rendered palatable or even comprehensible to human cognition. This discourse goes beyond mere philosophical debate; it directly impacts the future trajectory of human-machine symbiosis.
Artificial intelligence has been advancing at an exponential pace, far outstripping Moore's Law. From Generative Adversarial Networks (GANs) that create life-like images to quantum computers that solve problems unfathomable to classical ones, the AI universe is a sprawling expanse of complexity. What's more compelling is that these machine-constructed worlds aren't confined to academic circles. They permeate every facet of our lives—be it medicine, finance, or even social dynamics. And so, an existential conundrum arises: Will there come a point where these AI-created outputs become so labyrinthine that they are beyond the cognitive reach of the average human?
The Human-AI Cognitive Disconnection
As we look more closely at the interplay between humans and AI-created realities, the phenomenon of cognitive disconnection becomes increasingly salient, perhaps even a bit uncomfortable. This disconnection is not confined to esoteric, high-level computational processes; it's pervasive in our everyday life. Take, for instance, the experience of driving a car. Most people can operate a vehicle without understanding the intricacies of its internal combustion engine, transmission mechanics, or even its embedded software. Similarly, when boarding an airplane, passengers trust that they'll arrive at their destination safely, yet most have little to no understanding of aerodynamics, jet propulsion, or air traffic control systems. In both scenarios, individuals navigate a reality facilitated by complex systems they don't fully understand. Simply put, we just enjoy the ride.
However, this is emblematic of a larger issue—the uncritical trust we place in machines and algorithms, often without understanding the implications or mechanics. Imagine if, in the future, these systems become exponentially more complex, driven by AI algorithms that even experts struggle to comprehend. Where does that leave the average individual? In such a future, not only are we passengers in cars or planes, but we also become passengers in a reality steered by artificial intelligence—a reality we may neither fully grasp nor control. This raises serious questions about agency, autonomy, and oversight, especially as AI technologies continue to weave themselves into the fabric of our existence.
The Illusion of Reality
To adequately explore the intricate issue of human-AI cognitive disconnection, let's journey through the corridors of metaphysics and epistemology, where the concept of reality itself is under scrutiny. Humans have always been limited by their biological faculties—our senses can only perceive a sliver of the electromagnetic spectrum, our ears can hear only a fraction of the vibrations in the air, and our cognitive powers are constrained by the limitations of our neural architecture. In this context, what we term "reality" is in essence a constructed narrative, meticulously assembled by our senses and brain as a way to make sense of the world around us. Philosophers have argued that our perception of reality is akin to a "user interface," evolved to guide us through the complexities of the world, rather than to reveal its ultimate nature. But now, we find ourselves in a new (contrived) techno-reality.
Artificial intelligence brings forth the potential for a new layer of reality, one that is stitched together not by biological neurons but by algorithms and silicon chips. As AI starts to create complex simulations, predictive models, or even whole virtual worlds, one has to ask: Are these AI-constructed realities an extension of the "grand illusion" that we're already living in? Or do they represent a departure, an entirely new plane of existence that demands its own set of sensory and cognitive tools for comprehension? The metaphorical veil between humans and the universe has historically been made of biological fabric, so to speak.