I'm tired of working for small companies.
I'm always either the sole developer, or the only dev for a specific stack, and therefore don't have anyone to ask for help. If I can't figure something out, it just doesn't get done.
It also means I don't have anyone to bounce ideas off, do code reviews with, or even friggin' have someone who understands what I do.
It sucks.
It would be nice to have someone I could actually ask for help! As it stands, I tear my hair out in frustration until I'm desperate enough to beg for help on Discord or SO. Whereupon, of course, I get ignored, as per usual. asdjfklasdjf
It really sucks.
It also means that I'm often surrounded entirely by sales people and managers... you know, those super-talkative people? who basically get paid just to talk? and are absolutely computer illiterate? Yeah. Think someone who says "I need my deliverables by end-of-week," "customer success representative," "turnkey solution," etc. completely seriously. (ew).
They're the people who constantly wonder why I can't push `n` features in `n/4` days, and ofc can't understand anything I say in response because of the aforementioned illiteracy. They're also the people who, almost every week, ask how long `y` is going to take, and then yell "But I need it by Friday! I just sold 50 clients on it!" (And they do this, of course, without ever asking for timelines first.)
It really fucking sucks.
Though I suppose larger companies would still have these problems.
But at least I could ask for help once in a while. That would be nice.
Data Disinformation: the Next Big Problem
Code-generating LLMs like ChatGPT can produce SQL snippets. Regardless of quality, those snippets can retrieve data from prepared datasets based on nothing more than a user prompt.
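To be concrete, the kind of pipeline I'm ranting about looks roughly like this. A minimal sketch in Python, where `ask_llm()` is a hypothetical stand-in for whatever completion endpoint the corporate AI actually exposes, and the generated SQL runs unreviewed against a SQLite warehouse:

```python
import sqlite3

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for the corporate AI's completion endpoint.
    raise NotImplementedError("wire this to your model of choice")

def answer_with_sql(question: str, schema_ddl: str, db_path: str):
    prompt = (
        "You write SQLite queries.\n"
        f"Schema:\n{schema_ddl}\n"
        f"Question: {question}\n"
        "Return only the SQL."
    )
    sql = ask_llm(prompt)                    # nobody reviews what comes back...
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()  # ...and it runs as-is
```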
That data may, however, be garbage, and garbage data leads to garbage decisions by data-illiterate stakeholders.
Like with network neutrality and PII/PSI ownership, we must act now to avoid yet another calamity.
Imagine a scenario where a middle-manager-level illiterate barks some prompts at the corporate AI, and it writes and runs an SQL query against the company databases.
The AI outputs some interactive charts that show that the average worker spends 92.4 minutes on lunch daily.
The middle manager gets furious and enacts an Orwellian policy of facial recognition punch clock in the office.
Two months and millions of dollars in contractors later, and the middle manager checks the same prompt again... and the average lunch time is now 107.2 minutes!
Finally, the middle manager gets a literate person to check the data... and the piece-of-shit SQL behind the number turns out to be sourcing from the "off-site scheduled meetings" dataset.
Why? Because the dataset that does have the lunch-break data is labeled "labour board compliance 3", and the LLM decided the metadata of the wrong dataset was a better match for the user's prompt.
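That failure mode is trivially easy to reproduce. If "pick the dataset" is just "score the prompt against each dataset's metadata", a blandly worded description beats an accurate but cryptically labeled one every time. Toy sketch: the labels come from the scenario above, but the descriptions and the bag-of-words scoring are invented purely for illustration:

```python
def score(prompt: str, metadata: str) -> int:
    # "relevance" = number of shared lowercase tokens, nothing more
    return len(set(prompt.lower().split()) & set(metadata.lower().split()))

datasets = {
    "off-site scheduled meetings":
        "time each worker spends away from the office per day by schedule",
    "labour board compliance 3":
        "statutory meal and rest break punch records",  # the actual lunch data
}

prompt = "how much time does the average worker spend on lunch daily"
best = max(datasets, key=lambda name: score(prompt, f"{name} {datasets[name]}"))
print(best)  # -> "off-site scheduled meetings" -- confidently wrong
```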
So, given the very real-world scenario of mislabeled data, LLMs' inability to understand what they are saying or accessing, and the average manager's complete data illiteracy, we might have to wrangle some actions to prepare for this type of tomfoolery.
I don't think that access restriction will save our souls here; decision-flumberers usually have the authority to overrule RACI/ACL restrictions anyway.
Making "data analysis" an AI-GMO-Free zone is laughable, that is simply not how the tech market works. Auto tools are coming to make our jobs harder and less productive, tech people!
I thought about detecting new automation-enhanced data access and visualization and enacting awareness policies. But that would be of little help; once a shithead middle manager gets hooked on a surreal indicator value, it is nigh impossible to yank them off it.
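For what it's worth, the "awareness policy" I have in mind is barely more than a wrapper like the one below (the names and the logging policy are mine, purely illustrative): don't block the query, but record who asked, what ran, and against which dataset, and staple a provenance warning onto whatever comes back.

```python
import logging
import sqlite3

log = logging.getLogger("ai_query_audit")

def run_ai_generated_sql(sql: str, dataset: str, db_path: str, requested_by: str):
    # Awareness, not restriction: log the query and tag its output.
    log.warning("AI-generated SQL by %s against %r: %s", requested_by, dataset, sql)
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(sql).fetchall()
    return {
        "rows": rows,
        "provenance": f"auto-generated query against {dataset!r} -- verify before acting on it",
    }
```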
Gotta get this snowball rolling: we need some idea of future AI-housetraining best practices if we are to avoid a complete social-media-style meltdown of data-driven processes.
Anyone care to pitch in?