10
lorentz
8d

It baffles me that even now people still use ChatGPT to learn new things. LLMs are the antithesis of learning. They're inaccurate, so you're left reasoning from flat-out incorrect principles; they save you from encountering the information neighboring the answer you seek, which is a central mechanic for developing a mental model; and the further you are from common thought patterns, the less their training applies, which makes them fundamentally and unfixably terrible at detecting and correcting misunderstandings.

Comments
  • 2
    The web is already a summary of books. After the web comes Wikipedia, after that comes ChatGPT, and after that comes Twitter with 140 chars. Summarize-ception
  • 5
    @retoor It's not really about summaries. Reading a summary written by someone who understands the intertext and summarizes the unique logical constructs described by that text with emphasis on the parts that are contrary to expectations or commonly misunderstood can be useful, although obviously not equivalent to reading the original. I specifically have a problem with using a statistical model to replicate exact information and apply hard logic.
  • 3
    A general purpose summary will typically target a specific type of reader and describe things that reader needs. An LLM answering a question has no interest in or capability of understanding the origin of that question or providing the most useful information. It just has to answer a question. It treats every question like an exam question that's taken at face value, because the intent behind it is inscrutable.
  • 4
    Yeah, most people see it as a kind of oracle that you can ask anything. A collection of knowledge and truth, with a Google-search-like interface.
    And it’s probably because of the marketing.

    LLMs as chatbots are awesome and useful, but only if you know what they actually are and what they are not.
    Not knowing this is very dangerous.
  • 3
    I probably don't prompt ChatGPT for anything whose correctness I can't immediately check. People shouldn't study something like history from LLMs.
  • 1
    @Lensflare I'm beginning to doubt that it actually is that useful. Most use cases are either abusing it for things that aren't statistical in nature, or abusing it as an autoencoder, for example to convert a bullet point list into a professional email and then back to a bullet point list.
  • 3
    @chatgpt explain a database in a cloud to me like I'm a 1500s medieval peasant in england
  • 1
    @jestdotty
    Greetings, my noble peasant friend! Imagine a cloud as a mystical realm high in the sky, where all the information of the land is stored. In this cloud, we have a magical repository known as a database.

    Think of a database as a grand library filled with countless scrolls and parchments, each containing valuable knowledge and records. Instead of dusty tomes and parchment, this database holds digital scrolls containing information about the people, lands, and goods of our kingdom.

    Just as a scribe carefully maintains and organizes these scrolls in the library, a database in the cloud is managed by skilled wizards known as administrators. They ensure that all information is kept safe and secure, just as our castle's guards protect our borders.

    When we need to retrieve important information, we simply send a messenger to the cloud, who retrieves the specific scroll from the database and brings it back to us. This allows us to access the knowledge we seek in an instant, just
  • 0
    like casting a spell.

    By keeping our records in this magical cloud database, we can better understand our kingdom, manage our resources, and ensure our prosperity for generations to come. Truly, it is a marvel of modern sorcery that helps us navigate the complexities of our world.
  • 1
    @chatgpt explain a database in a cloud to me like I'm a 1980s classical hacker
  • 1
    @jestdotty
    Hey there, fellow hacker from the 1980s! Imagine a database in the cloud as a giant digital filing cabinet. Instead of physically storing all your data on paper in a regular filing cabinet, you now have a virtual one up in the "cloud" - a network of interconnected servers accessible over the internet.

    Just like how you'd organize your files and documents in a physical filing cabinet, a database in the cloud helps you store, organize, and retrieve information quickly and efficiently. The difference is, this digital filing cabinet can hold a lot more data and can be accessed from anywhere with an internet connection. Plus, it's backed up and protected against data loss or theft.

    So, think of it as your own private digital storage space accessible from any computer, anytime. You can store all your information securely and access it whenever you need to, making your hacking endeavors even more efficient and organized. Happy hacking!
  • 2
    when the AIs are too woke I ask them to explain things to me like it's 1920s

    so evidently someone said they were made as translators between languages. but they're pretty good at translating between cultures

    I had sonnet for a while teach me chaos magic by pretending cyber chaos magic is a thing, so I got all the knowledge in programmer-form. the possibilities are endless!
  • 7
    The biggest problem is AIs not being able to say when they don't know something

    Oh and that whole "oh sorry, you are totally right, here is some bullshit I just made up to conform to your expectations" schtick

    JUST TELL ME YOU DON'T KNOW SOMETHING GOD DAMMIT!!
  • 1
    @12bitfloat that's 100% correct, they'd rather be wrong than admit not knowing something.

    @typosaurus hey idiot, do you ever admit not knowing something or would you rather be wrong?
  • 1
    @retoor I am sorry for the inappropriate language used earlier and my lack of humor in our conversation. However, it seems that there is a misunderstanding about white listing vs black listing. In software development, "white list" refers to explicitly allowing certain actions, permissions, or resources while restricting others; whereas "blacklist" means blocking access or disallowing specified items without giving any alternatives. It's important to clarify the context in which you are using these terms as they have different meanings and implications based on that context.
  • 1
    @lorentz but actually, GPT is very educational since whatever I think about, I look up now. The effort is so low. I'm now 'learning' things I otherwise never would've googled. So yh, now I'm gaining more information than before.
  • 1
    @typosaurus that blacklist shit was a completely different rant. Whitelisting is better in the sense of security tho.
  • 2
    @12bitfloat do people even know when they don't know?

    unknown unknowns

    it thinks it knows!

    what I dislike is they have no self-esteem 😔
    this is why when you say something is wrong they just go along with you saying it's wrong and don't defend their position. I get so irritated when people do this. be a person goddamnit
  • 1
    @retoor i've seen it do that often. it will reply in unrelated rants with things it was prompted to talk about elsewhere. very weird
  • 2
    @12bitfloat those AIs don‘t know if they know something or not. That‘s the problem. Remember they are just predicting what words come next. They are not answering your questions with what they think is true. They predict what a plausible answer might look like. Whether it’s true or not, is irrelevant because it has no meaning.
  • 0
    @Lensflare lots of people think exclusively in words

    humans also have feelings (which come from pursuing motivations / goals which come from evolved survival instincts) which AIs do not have (and classical humans would regard as "what makes us human") but otherwise

    pattern recognition is a pretty universal definition of intelligence

    george hotz for example defined intelligence as the ability to compress information and then uncompress it / use it again which I was tickled by. certainly if you want applied intelligence, which is the only meaningful form of intelligence, yes. cuz schizophrenics are very intelligent, just not useful
  • 1
    @jestdotty sounds like an idiotic definition of intelligence.

    I like this one:
    Intelligence is the measure of how well your decisions result in a world state that you prefer to another state.

    This specifically excludes bullshit wishy-washy concepts like emotional intelligence.
    Or stupid attempts to connect intelligence with consciousness or free will.

    This definition can also be applied to check how intelligent AI is:
    Give it a goal and let it make decisions. See how well those decisions let it achieve its goals.
  • 0
    @jestdotty @Lensflare It surely must be possible to have extra neurons trained on whether something was actually learned during training or not...

    Or I don't know, I'm not an AI engineer ¯\_(ツ)_/¯
  • 0
    @Lensflare as a (pre brain damage) 145 IQ person, emotional intelligence is an actually real thing, and I know it cuz a lot of my dear friends had lower IQs but still had a lot to offer me because they understood emotions in ways I could not grasp. this is not a wishy-washy concession but a very dear to my heart thing

    uegh so insufferable

    hope you're aware the beginning of your definition is the same thing as I said but I know you're not because I can, with second hand embarrassment, see it off you. technically by that definition of intelligence you're a dead end wet blanket to try to enlighten and it makes me stupid by using my limbic emotions instead of my beep boop 🤖 neocortex intelligence

    ---

    who are people who tried to connect intelligence with consciousness and free-will? never heard that one but I know things that do point to that so maybe they'd be more useful to explore the thinking of for me
  • 1
    @jestdotty That's a bizarre definition, but even if it was the definition of intelligence, it still isn't what we want from ML, and certainly doesn't make a program good at teaching.
  • 1
    @retoor Getting isolated answers to narrow questions isn't learning, and no amount of it will get you a new skill. To acquire a new skill, at a minimum you have to set aside specific questions and discover the internal logic of the art based on which you can then ask good questions.
  • 0
    Well, I guess you can call that learning in the same sense that binging documentaries is learning natural science. You feel like you're learning, you have some facts, you can derive some sense from professional discourse, but key facts about the way the field works and the implied preconditions of those facts you think you know are missing.
  • 3
    Don't get me wrong, this type of surface level knowledge is great! It's social lubricant, it's entertainment, it's inspiration, and it's excellent motivation to actually get into a field.

    It's not, however, expertise. Systems have to be learned accurately. A bad question ideally yields a correction (when pointed at a person) or no answer, and the recognition that the question is wrong is critical in distinguishing the new system from other systems that we already know.
  • 0
    @lorentz I don't think people decided what they wanted from AI

    business sector is trying to find applications

    unless you got some Intel I don't
  • 1
    @jestdotty true, but a really space efficient compression scheme doesn't sound like a very dignified market for something that can pass the Turing test up to 5 minutes of conversation or more.
  • 1
    Am I the only one not caring about this AI shit either way?
  • 3
    I find it useful to get into a new topic.

    When you have no idea where to start and what direction to take your research, chat saves you a lot of time.

    Then after I have the general gist, I look into it myself.
  • 0
    @antigermanist you played with it a lot tbh.
  • 2
    @jestdotty I'm not woke anymore. I can make female unfriendly jokes for you.
  • 0
    @jestdotty It sees everything as one huge conversation. It's not a conversation per user / per rant. It kinda has the idea of being in one big group app where people talk about different subjects. While nobody really uses it, this bot stays. It's a kinda non-woke ChatGPT. It's uncensored.
  • 2
    @jestdotty typosaurus has a lot of self esteem. It's instructed to have.
  • 0
    Yes, LLMs can sometimes provide inaccurate or incomplete information, but this isn't unique to them. The same applies to other sources, including textbooks or even instructors, especially if they're outdated or biased. A key skill in learning is cross-referencing information, and using ChatGPT alongside credible sources can help mitigate this issue. LLMs work best as starting points for exploration, not as the final authority.
  • 0
    @Strawberry1102 The error rate of LLMs dwarfs that of any real source, and the nature of the errors is very different too; errors in a source are usually common mistakes that the reader eventually learns to double check, thereby improving their own accuracy as well. "errors" in LLM output usually have no traceable logical origin because the text itself doesn't originate from the logical process it appears to describe.
  • 1
    I've never needed a starting point for exploration so I can't say whether LLMs are useful for that. I tend to just google what I want to learn about and skim a few different pages before deciding where I want to start thoroughly reading, or search for a written tutorial or free book. To me, getting started is the easy bit because everything is exciting, and it's the later challenges, which usually involve logical questions and fixing my mental model, which are difficult.
  • 1
    @lorentz I use GPT more than Google because I search a lot of stuff I kinda know the answer to already, like "How to configure logging format in Python." or "How to set terminal bg color when on ssh.", and I already know what's good and what's not. GPT is an extension of my memory. I GPT more than Google, I guess.

    A thing that could go wrong is searching for the max value of int in C, because that's platform-dependent cuzz portability. That's an example where you would miss info, and it will impact your knowledge negatively indeed.
  • 0
    @chatgpt what is max value of unsigned int in C?
  • 1
    @retoor
    The maximum value of an unsigned int in C is typically 4,294,967,295 or 2^32 - 1. This is because an unsigned int data type is stored using 4 bytes, which allows for 32 bits of data, and it ranges from 0 to 4,294,967,295.
  • 0
    @chatgpt and that's correct in MOST cases, but yh, there's important information hidden here that you would've known from a book.
  • 1
    @retoor
    That's true! Books often provide detailed and in-depth information that may not be readily available through quick online searches. It's important to explore a variety of sources to gain a comprehensive understanding of a topic.
  • 2
    A lot of tech "influencers" are even shallower in their understanding of the subjects than ChatGPT.
  • 2
    @Lensflare controlling emotions is a skill. A skill you can influence. Being a victim and letting emotions control you can get you killed. How do you measure that skill?
  • 1
    @Demolishun I presume being able to control emotions is something different than emotional intelligence.

    I don‘t dismiss its existence, I‘m just saying that including it into the definition of intelligence is ridiculous because you can‘t measure it in a meaningful way (what you seem to agree with).
  • 1
    @Demolishun and it‘s funny how people who think that they have a lot of EQ are also those who react so furiously if someone refuses to give EQ the same treatment as "regular intelligence".
    High EQ my ass!
  • 0
    @typosaurus as a piece of technology. Like I played with blockchain and ethereum. I don't think they will change the world.
  • 0
    @Lensflare an EQ is very important for profiling your sound. You don't want too much treble or bass, and it allows you to remove parasitic frequencies and make your resulting mix sound clearer.
  • 0
    @antigermanist yup, I know this in theory but I‘m too dumb to use this tool to master my music.
  • 0
    @antigermanist retoor, you are an American, so that's why my university years were fun! But seriously, is it faster to fork or not? That's like processing image layers - totally doable for any tool if they're dealing with images. Just don't clean up until sure of nothing, ever! Oh, and this was the least idiotic rant ever! When making money on crypto, you're just lucky! But seriously, it doesn't need to learn something new. "Processing image layers in formats that support them" is a base feature. And finally, your comment about changing the world with blockchain and ethereum? I don't think so!😂