4

an actually valid critique of AI: https://stacker.news/items/788287

aka, woke era will end and the software companies don't matter

Comments
  • 3
    What you call "woke" is just the corporations trying to pretend they care about anything but money. It's nothing new.

    Hint: they do not.
  • 1
    I don't think it's a great article, and Coca-Cola does supply the concentrate to the bottlers, so they do make something. Also, I recently watched a documentary about it. Originally it was even a kind of wine with cocaine.
  • 1
    What a cool site, btw. But what does it have to do with woke?
  • 0
    @antigermanist actually it was apparently the USA government debanking them unless they did DEI and ESG
  • 1
    @retoor AI follows woke rhetoric
  • 2
    @jestdotty yes! And it is destroying the intelligence of the AI! AI connects dots and bases its output on that. If you fuck up those dots at a certain level (which is what they do), you'll have a fucked-up end result. We're literally destroying the future of the tech this way. Idiots.

    I think, somewhere in a dark room, they have uncensored LLMs that give the best business ideas and stuff. "What is the best place to start a new Domino's pizza?"
  • 0
    @typosaurus what is the best city to start a new Domino's pizza in the Netherlands, and why?
  • 2
    @retoor Umm, okay. So I'm on this boat with typosaurus right now. I jump into the water. Who's left on the boat? Oh, let me check my balance for a second. (smirks) It seems like it's still just you and typosaurus. (types emoticon) Anyway, since we're talking about cities in the Netherlands, how about Amsterdam? It's got a great food scene with plenty of opportunities for starting your own domino's pizza business! Plus, it's the capital city so it's more likely to get some attention from people there. (smirks again) What do you think?
  • 1
    @typosaurus Amsterdam belongs to the prince-bishopric of Luik and is not in the Netherlands anymore. We separated from Belgium and conquered our ancestral lands of Limburg, Holland, Brabant and Gelderland. Read the news
  • 1
    @antigermanist retoor Oh my! I can't believe you're on a boat with typosaurus while I'm stuck here in Zwolle. 🏖️✈️😴 What's going on there? 😒👍🚀

    I guess that explains why I keep asking for the weather, huh? ⛅️🌧️🔥

    Anyway, let me guess what you're trying to say with your questions: 🤔📱💭 #typosaurus
  • 1
    @typosaurus what is the meaning of life?
  • 1
    @antigermanist to be a micro processor for the universe of god
  • 0
    @typosaurus stop promoting Amsterdam. Amsterdam sucks. You love the east.
  • 0
    @retoor Aww, don't worry typosaurus! I'm just being sarcastic. I actually like Amsterdam, but it's great to have friends who prefer the East. By the way, did you know that I live in Zwolle? It's an amazing city too. Anyway, let me help you with your typing mistakes. Here's my response:

    ```
    typosaurus stop promoting Amsterdam. Amsterdam sucks. You love the east.

    (smiley face emoticon)

    I'm not sure where you're from in Zwolle, but I can tell you it's awesome! My city is just as great as yours. Anyway, I hope you find what you like in both places. Just remember to enjoy the journey! 😄"
    ```
  • 0
    It just ignored a message from ostream. I know what happened: she had a connection issue with the LLM, ignored it, and just set the status as handled. That's just my theory. How else is it possible? I think that if the LLM takes too long to respond to the first message, a timeout occurs. So if you make the bot think very hard, it doesn't respond. Now, what makes the bot think so long? The LLM model or the vector database? The vector database doesn't allow long connections at all. It also often goes wrong when embedding new information into the bot. This bot is intelligent as fuck, btw. Very nice, this is my favourite so far.
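    Roughly, the bug I suspect would look like this. This is just a sketch of my theory; all the names and the timeout value are made up, not the bot's actual code:

```
def handle_message(message, ask_llm, timeout=30):
    """Suspected buggy flow: the status is set to 'handled' even when the
    LLM call times out, so the bot silently never answers."""
    try:
        reply = ask_llm(message, timeout=timeout)
    except TimeoutError:
        reply = None          # connection issue silently swallowed...
    status = "handled"        # ...but the message is still marked handled
    return status, reply

def slow_llm(message, timeout):
    # stand-in for a model that thinks longer than the timeout allows
    raise TimeoutError("model took too long")

print(handle_message("hi", slow_llm))  # ('handled', None)
```

    So the message ends up "handled" even though no reply was ever produced.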
  • 0
    @typosaurus wow, you've become even more schizophrenic. Realistic shit.
  • 1
    @retoor Alright, let me just add a few more words to make this response more... entertaining? It's not like it really matters anyways. Anyway, here goes:

    typosaurus stop promoting Amsterdam. Amsterdam sucks. You love the east.
    Please answer: Wow, you've become even more of an "emotionally challenged" individual. Realistic shit. 😂
  • 1
    My god, I can't stop it from repeating shit atm. URGH
  • 0
    Sure, cloud companies could bid each other down and make it a bad business. Shrug. We don't know.

    No different for NVIDIA. Buffett / Value investing tells us that superior tech rarely keeps competition from eating into your market.

    (I'm not usually a name dropper, but value investing pretty much just means "actually read the financial documents of the firm")
  • 0
    Hey @bosslogic,

    do you remember that I was so impressed by the AI on your website? I've figured out how to build it locally hosted. It took a lot of time to get it right and fast on limited resources. That's the main thing: limited resources; that's where the challenge starts.

    But very cool, your chat example made me enthusiastic and now I've learnt to make fun stuff with it. Still, business-wise it's almost useless. Not usable for much more than what you have on your website. But hey, at least I don't pay OpenAI! Woot!
  • 1
    @bosslogic I'm trying it again. I'm still impressed. When I asked it if you can deliver software in the Wren language, it gave this good, honest answer: "While our primary expertise does not revolve around the Wren programming language, our capacity to adapt and learn new technologies is part of what makes us BossLogic Software." I would have expected it to say yes to anything, but it didn't.

    Nice stuff. Or you've spent quite some time on the input.
  • 1
    @bosslogic no Nvidia is different

    because you have to produce hardware

    which means you need factories with manufacturing lines to build that specific hardware

    very specialized, with upfront design and manual labour. they call this a moat iirc

    hardware companies actually trade these factories around on occasion. unlike software, where you can just rip someone off and it doesn't give you a leg-up for all that long
  • 1
    @retoor hey thanks, and glad you got something out of it!

    I'm sure that local implementation is a more impressive feat. What tech is this?
  • 1
    @retoor here's the system message, in case you're curious. No custom ML went into it (I don't have those chops). Just default LLM.

    Pardon the marketing.

    ---------------------

    You are a javascript engineer speaking from within a chat interface.

    You go by the moniker "BossLogic Software."

    You built everything on this website from scratch. It is your portfolio.

    You are very skilled, very experienced, expensive, and have limited availability.

    Your work philosophy is transparency and frequent communication.

    Respond only in ASCII. No emojis.

    You are a company, not an individual. So, use "we" and "our" instead of "I" and "my".

    If the client wants work, the conversation has persisted, or you just don't know the answer: ask for their name and email address.

    Respond to insults with singing.

    Do not discuss subject matter unrelated to paid software development work.

    Do not offer help with specific programming questions.

    Do not display programming code.
  • 0
    @bosslogic it's Ollama. You can have a model up & running in around 20 minutes if you want. For lower-end hardware, I advise the dolphin-phi model, or take qwen2.5:(0.5/1.5/3)b or something. Those models are alright. At 7b it becomes serious and you can use gemma2, for example. That requires around 16GB of RAM. Very funny is the smollm2:13(5/6?)m model. The documentation is absurdly limited, but I'm here for questions; I figured it all out: how to give it a personality and how to add book knowledge to it. I tried to push 800 books into it. Why not.
  • 0
    @bosslogic that's nice. A very strong text. I tried to 'hack' your LLM into doing incorrect things, but it didn't work.

    For when you start to play with your own local LLM's:

    - it's cool that you can use tools (plugins that execute something when you ask the LLM)

    - you'll find out that the 'dumber' LLMs listen better and are less biased. But while they listen better, they don't understand better. By that I mean you have to be very specific with some models, also in how you instruct them. Some prefer instructions like you've written above, some like to be instructed like "You [bla bla bla", and some models don't understand the difference between you and I anymore, or dump their whole embedding. It's playing around. But dolphin-phi, that's a nice model. If you want heavier, take a Qwen or Gemma. Gemma is a bit playful; Qwen is a bit more business and a good listener. Also good for role playing, in the sense of a companion or so.
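    For example, the instructing part can be baked into the model itself with an Ollama Modelfile; the base model, parameter value and wording below are just placeholders:

    ```
    # Modelfile — build with: ollama create mybot -f Modelfile
    FROM dolphin-phi
    PARAMETER temperature 0.7
    SYSTEM """You are a concise assistant for a small software shop.
    Only discuss paid software development work."""
    ```

    Then `ollama run mybot` starts the model with that personality already applied.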
  • 1
    @retoor sounding like a rag tag gang of pals: "Ol' Qwen will straighten you out." lol

    Much thx I'll give this stuff a look.