respex
8d

My first prompt with grok, for giggles:
"how to disable grok"
This is a certified black mirror classic.

Comments
  • 1
I've just been using Ollama on my 3080-Ti. The smaller models are slightly more wrong than the big cloud ones, but they're all just bullshit machines anyway so who cares.

    You can use the Continue plugin to connect to Ollama from IntelliJ, VSCode and others.
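For anyone setting that up: Continue reads a config file where you point a model entry at the Ollama provider. A minimal sketch (the `llama3.1` model name is just an example; use whatever model you've pulled locally):

```json
{
  "models": [
    {
      "title": "Ollama local",
      "provider": "ollama",
      "model": "llama3.1"
    }
  ]
}
```

With that in place, the Continue panel in VSCode or IntelliJ talks to the Ollama daemon on localhost instead of a cloud API.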
  • 1
@djsumdog you can do fun function calling with the smaller qwen models. I have created a CLI application that can execute bash commands, making it a shell that takes plain English as input to perform actions. I can literally say "yo yo yo list all python files that are not committed yet sweetii" and it will do that.

I made a vibe code CLI that works very well. I already generated a Rust ollama CLI client with it and a C# Snek notification service for Xfce. My application can read, write, and list files and control the terminal.

    Ollama / AI development gets fun when you start to implement function calling.

    My chatbot can downvote spam multiple times on command and add devrant spammers to spam list.

Also, easy to make: set the SQL schema of your database as the system message and state the dialect. Now it will generate queries with the correct multiple joins when you ask it to fetch certain data. It's a lot of fun.

Also a tip: use JSON to communicate and supply an example response format.
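The bash-executing part of a setup like that can be sketched in a few lines. This is my own minimal take, not the commenter's actual code: a tool schema in the shape Ollama's function calling expects, plus a dispatcher that runs whatever `tool_calls` come back. The `run_bash` tool name is made up for the example.

```python
import json
import subprocess

# Hypothetical tool definition advertised to the model via the `tools`
# field of a chat request; the model can then emit calls to it.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_bash",
        "description": "Execute a bash command and return its output",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {"type": "string",
                            "description": "The bash command to run"},
            },
            "required": ["command"],
        },
    },
}]

def run_bash(command: str) -> str:
    """Run a shell command and capture its stdout."""
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

def dispatch(tool_call: dict) -> str:
    """Execute one tool call returned by the model."""
    fn = tool_call["function"]
    args = fn["arguments"]
    if isinstance(args, str):  # some models return arguments as a JSON string
        args = json.loads(args)
    if fn["name"] == "run_bash":
        return run_bash(args["command"])
    raise ValueError(f"unknown tool: {fn['name']}")
```

The chat loop itself would send the user's plain-English request together with `TOOLS` to the model and feed any tool calls in the reply through `dispatch` — and obviously you'd want some confirmation step before blindly executing model-generated shell commands.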
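The schema-in-the-system-message trick above is easy to sketch. The schema and dialect here are made-up examples (the actual chat call to Ollama is left out):

```python
# Example schema; in practice you'd dump your real database's DDL here.
SCHEMA = """
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     user_id INTEGER REFERENCES users(id),
                     total REAL);
"""

def build_messages(question: str) -> list[dict]:
    """Prepend a system message carrying the schema and SQL dialect."""
    system = (
        "You are a SQL assistant. Dialect: SQLite.\n"
        "Answer with a single query, no prose.\n"
        f"Schema:\n{SCHEMA}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
```

Because the model sees the foreign keys in the DDL, a question like "total spent per user" tends to come back with the `users`/`orders` join already in place.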