
I had to make a reverse-geocode service in C#.
I made a new empty .cs file and copy-pasted the OpenStreetMap API endpoint as a comment for reference

Pressed enter.

And BAM. Copilot (in Visual Studio) autosuggests the ENTIRETY of the class with accurate JSON-to-POCO conversion ._.
I dislike AI but can't deny its usefulness in this kinda manual work. I'd take this over some low-tier junior dev any day
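For context, the boilerplate in question is roughly the following. This is a minimal sketch, assuming Nominatim's `/reverse?format=jsonv2` endpoint; the POCO field names are based on OpenStreetMap's public docs, so treat the exact shape as illustrative rather than exhaustive:

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Text.Json.Serialization;
using System.Threading.Tasks;

// Demo: map a sample Nominatim-style response onto the POCOs below.
var sample = """{"display_name":"Berlin, Deutschland","lat":"52.52","lon":"13.40","address":{"road":"Unter den Linden","city":"Berlin","country":"Deutschland"}}""";
var result = JsonSerializer.Deserialize<ReverseGeocodeResult>(sample);
Console.WriteLine(result?.Address?.City); // prints "Berlin"

public class ReverseGeocodeService
{
    private static readonly HttpClient Http = new();

    public async Task<ReverseGeocodeResult?> ReverseAsync(double lat, double lon)
    {
        var url = $"https://nominatim.openstreetmap.org/reverse?lat={lat}&lon={lon}&format=jsonv2";
        using var req = new HttpRequestMessage(HttpMethod.Get, url);
        // Nominatim's usage policy asks for a descriptive User-Agent.
        req.Headers.UserAgent.ParseAdd("my-reverse-geocoder/1.0");
        using var resp = await Http.SendAsync(req);
        resp.EnsureSuccessStatusCode();
        return JsonSerializer.Deserialize<ReverseGeocodeResult>(
            await resp.Content.ReadAsStringAsync());
    }
}

public class ReverseGeocodeResult
{
    [JsonPropertyName("display_name")] public string? DisplayName { get; set; }
    [JsonPropertyName("lat")] public string? Lat { get; set; } // Nominatim returns coords as strings
    [JsonPropertyName("lon")] public string? Lon { get; set; }
    [JsonPropertyName("address")] public AddressInfo? Address { get; set; }
}

public class AddressInfo
{
    [JsonPropertyName("road")] public string? Road { get; set; }
    [JsonPropertyName("city")] public string? City { get; set; }
    [JsonPropertyName("country")] public string? Country { get; set; }
}
```

The `[JsonPropertyName]` attributes are the tedious part Copilot filled in: snake_case JSON keys mapped onto PascalCase properties.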

Comments
  • 3
I'm guessing most of its knowledge comes from GitHub, coz before OpenStreetMap, I used a newer, more niche mapping service whose documentation is horrid (the fuckers' auto-generated documentation shows return types as `Object[] object`)

and Copilot suggested an 80% accurate object definition there too, barring some non-existent class that I had to write by looking at their JSON responses
  • 0
Yea, it's kinda scary. I enabled Copilot in VS about a month ago and it's like it's reading my mind.

    I've been finding myself relying on Copilot more and more for the POCO type plumbing code.

Right now I am moving code from the deprecated Microsoft Teams webhooks to Power Automate/Workflows. I started to type some of the AdaptiveCard plumbing and *BAM!*, it somehow knew the exact fields and how I was going to use them.

    I almost said out loud "Get behind me satan!"
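For anyone doing the same migration: the envelope the Workflows HTTP trigger expects is a "message" wrapping an Adaptive Card attachment. A minimal sketch, with card fields following the public AdaptiveCards 1.4 schema (the text content here is made up):

```csharp
using System;
using System.Text.Json;

// Build the payload shape the Teams Workflows (Power Automate) trigger expects:
// a "message" carrying an Adaptive Card attachment.
var payload = new
{
    type = "message",
    attachments = new[]
    {
        new
        {
            contentType = "application/vnd.microsoft.card.adaptive",
            content = new
            {
                type = "AdaptiveCard",
                version = "1.4",
                body = new object[]
                {
                    new { type = "TextBlock", text = "Build finished", weight = "Bolder" },
                    new { type = "TextBlock", text = "All tests passed.", wrap = true }
                }
            }
        }
    }
};

var json = JsonSerializer.Serialize(payload);
Console.WriteLine(json);
// POST `json` to the workflow's HTTP trigger URL with Content-Type: application/json.
```

Anonymous types keep the plumbing short here; for anything bigger you'd want real POCOs (which is exactly the part Copilot keeps autocompleting).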
  • 2
Yeah, it's almost as if *transformers* were made to transform input, not to generate creative ideas, write new code from generic prompts, or sext with people.

I don't hate AI, I hate that it's being abused. Square peg forced into the round holes of all the users...

But use it correctly and you get reliable outputs. Even 1B models can do perfect mappings, formatting and info distillation most of the time, and you can run them on a CPU at great speed, or on a consumer GPU instantly.
  • 0
well, it's basically boilerplate, so it makes sense

    sometimes I try to get it to spit out the usability docs before I use a library. sometimes it works
  • 1
Had an interesting result with Gemini for VS Code. The test fails using its own generated unit test of [12, 11, 10, 1, 2, 3, 4, 5, 6, 7, 8, 9]. It insists the answer is 9 when the answer is 12. Even after providing its own solution back to it (attached screenshot), it still says 9 when the answer is 12. After several attempts, it still produces the same code that fails its own test.
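The original prompt isn't shown, but assuming the task boiled down to "return the largest element", here's the trivial version that passes the commenter's own test case (array and expected value taken from the comment above):

```csharp
using System;

// Straightforward linear scan; no assumptions about the array being sorted
// or rotated, which is presumably where the generated code went wrong.
static int Max(int[] xs)
{
    var best = xs[0];
    foreach (var x in xs)
        if (x > best) best = x;
    return best;
}

var data = new[] { 12, 11, 10, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Console.WriteLine(Max(data)); // prints 12, not 9
```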
  • 0
I think OpenAI accomplished a good product for reference way back
  • 0
I don't want AI inserting whole blocks of code
  • 0
ChatGPT is really good for "solving" problems that have already been solved
  • 0
    And using that code like a customizable template

I like it