9
Wisecrack
356d

Ideas:

1. Scrape GitHub (see the sketch after this list).

2. Attach feature-size estimates (on an abstract scale) as labeled examples across many projects.

3. Use this as prompt/fine-tuning data.

4. Train and prompt on project descriptions relative to feature size and to the number of contributors and changes in the changelog.

5. Package and release a model that takes descriptions of ideas and generates reasonable estimates of time and manpower.

6. Optionally, sell it as an estimation service to corporate clients and make money introducing some sanity to the world for a change.
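
A rough sketch of what ideas 1-4 could look like in Python, assuming the public GitHub REST API and the requests library; which fields count as size/manpower signals, and the feature_size label itself, are placeholders rather than a fixed schema:

    import requests

    API = "https://api.github.com"

    def repo_stats(owner, repo, token=None):
        """Collect the raw signals from ideas 1-4 for one repository."""
        headers = {"Authorization": f"token {token}"} if token else {}
        meta = requests.get(f"{API}/repos/{owner}/{repo}", headers=headers).json()
        contributors = requests.get(
            f"{API}/repos/{owner}/{repo}/contributors",
            headers=headers, params={"per_page": 100},
        ).json()
        return {
            "description": meta.get("description") or "",
            "created_at": meta["created_at"],    # proxy for project age
            "pushed_at": meta["pushed_at"],      # proxy for ongoing activity
            "contributors": len(contributors),   # manpower signal (idea 4)
            "open_issues": meta["open_issues_count"],
        }

    # One training row; feature_size is the hand-assigned label from idea 2.
    row = repo_stats("torvalds", "linux")
    row["feature_size"] = None  # to be filled in by a human rater

From there, idea 3 is just serializing rows like this into prompt/fine-tuning pairs.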

Comments
  • 5
    Why not keep it simple and use a PRNG? They'll be equally accurate.
  • 1
    So you want to estimate the completion time of something you don't even know will actually complete.

    If you manage to do that reliably, you might get a call from Stockholm.
  • 0
    @Oktokolo That's why it's called an estimate: there is inherent uncertainty. A mechanism that takes task descriptions and estimates a realistic labor/time budget based on actual data would, I think, be a welcome change.
  • 2
    If you're doing all that, you could just have the AI write it.
  • 2
    Answer is 42
  • 3
    120 hours to MVP if it's something I've done before. 240 if it's novel to me. Never failed me.
  • 1
    @Wisecrack You likely had the idea for this because humans fail to even roughly hit the right order of magnitude all the fucking time.

    The reason for the abysmal performance of humans is that developing software is a pretty chaotic process - almost like the motion of a double pendulum. In theory it shouldn't be chaotic. But in practice it somehow always is.

    Theoretically you could make an AI better than the average human by feeding it psychological profiles of all relevant company employees and some company stats. But the former isn't easy to obtain.

    AI is as good as humans at predicting the future state of chaotic systems - which is to say, not at fucking all.
  • 1
    @Oktokolo I have a theory that the average sentence length in a product description, relative to the overall length of the task description, strongly correlates with actual development time - not via the time it takes to perform the tasks in question, but via the mean length of delays, unknowns to be solved, and unanticipated complications... and I write this because it is what I've observed again and again. (Rough sketch at the end of this comment.)

    It's just not formalized.

    The big factor is: has a dev or team *successfully* done a project like this before?

    That should translate into

    1. a more thorough task description

    2. shorter or longer sentences in the description.

    If it turns out to be true, it does say something about how far out the prediction horizon can be pushed, but it also says that the current methods of estimating can be improved substantially.

    If we want to be absurd, it's almost a tautology of course: "devs w/ experience on IJK, will write better, more thorough descriptions of future projects like IJK."
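
    A minimal sketch of that feature, assuming plain-text task descriptions; the regex split is a crude sentence-boundary heuristic, not the missing formalization:

        import re

        def mean_sentence_length(description):
            """Mean words per sentence - a crude proxy for thoroughness."""
            sentences = [s for s in re.split(r"[.!?]+", description) if s.strip()]
            if not sentences:
                return 0.0
            return sum(len(s.split()) for s in sentences) / len(sentences)

        # Hypothetical use: regress actual dev time on this plus total length.
        spec = "Parse the changelog. Attach contributor counts. Train a regressor."
        print(mean_sentence_length(spec))  # -> 3.0 words per sentence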
  • 1
    @Wisecrack I am not sure how big that factor is. But yes, completeness of the initial description (which hopefully correlates with length) might be an important one.

    Maybe you can reliably predict the projects that go straight to moving-goalpost-hell. That certainly is one of the things that delay projects indefinitely.
  • 0
    @jestdotty OP's project is more like what AI is actually best at: estimates. Coding with AI is only half a step above coding via dartboard. LLMs don't actually remember things like context; they're just the biggest state machines for text anyone's made yet, and they know which words are commonly linked together. Add a little RNG so they don't pick the same word over and over, and you've made the latest GPT flavor.
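
    For the curious, here is that caricature as a toy bigram Markov chain - words that commonly follow each other, plus a little RNG. Real transformers condition on far more than the previous word, so this is illustrative only:

        import random
        from collections import defaultdict

        corpus = "the cat sat on the mat and the cat ran away".split()

        # Record which words commonly follow which.
        follows = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev].append(nxt)

        # A little RNG so it doesn't pick the same word over and over.
        word, out = "the", ["the"]
        for _ in range(8):
            word = random.choice(follows[word]) if follows[word] else random.choice(corpus)
            out.append(word)
        print(" ".join(out))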