• Scubus@sh.itjust.works · 1 day ago · +11/-19

    Ok, I doubt anyone is going to be willing to have this discussion, but here I am. My assessment is as follows: to be of value, “AI” doesn’t need to be perfect, it just needs to be better than the average programmer: producing the same quality of code twice as fast, or code that’s twice as good in the same amount of time. If I wanted it to code me a video game, I would personally judge it by how well it does against what I would expect from human programmers. Currently there is no comparison; I’m no coding expert, but even I find myself correcting AI on even the simplest code. That’s only temporary, though. Ten years ago this tech didn’t even exist; ten years from now (assuming it doesn’t crash our economy in more ways than one) I would imagine the software will at least be comparable to an entry-level programmer.

    I guess what I’m getting at is that people rail against AI for faults that a human would make worse. Take self-driving cars: having seen human drivers, I definitely want that tech to work out. Obviously it’s ideal for it to be perfect and coordinate with other smart cars to reduce traffic loads and improve safety for everyone, but as long as it’s safer than a human driver, I would prefer it. As long as it codes better than your average overworked, underpaid programmer, it becomes a useful tool.

    That being said, I do see tons of legitimate reasons to dislike AI, especially in its current form. A lot, I’d say most, of those issues don’t actually lie with AI at all, or even with LLMs. Most of the issues I’ve heard with AI development are actually thinly veiled complaints about capitalism, which is objectively failing even without AI. The others are mostly complaints about the current state of the tech, which I find less valid; it’s like complaining that the original iPhone didn’t have lidar built in like current ones do. Setting aside the capitalism issues (how this tech will be used, how it’s currently being funded, its environmental impact, and the fact that this level of research spending is unsustainable and will collapse the economy), give the tech time and it will mature. That almost feels like sarcasm given those very real issues, but again, those are all capitalism issues. If we were serious about saving our planet, a guardian AI that automatically drone-strikes sources of intense pollution would go a long way. If you’re worried about robots takin’ yer jerbs, try not being capitalism-pilled and realise that humans got by for eons without jobs or class structures. Post-scarcity is almost mandatory under proper AI, and capitalism exists to ensure that post-scarcity can’t happen.

    • NaibofTabr@infosec.pub · 1 day ago (edited) · +42/-1

      AI coding tools can do common, simple functions reasonably well, because there are lots of examples of those to steal from real programmers on the Internet. There is a large corpus of data to train with.

      AI coding tools can’t do sophisticated, specific-case solutions very well, because there aren’t many examples of those for any given use case to steal from real programmers on the Internet. There is a small corpus of data to train with.

      AI coding tools can’t solve new problems at all, because there are no examples of those to steal from real programmers on the Internet. There is no corpus of data to train with.

      AI coding tools have already ingested all of the code available on the Internet to train with. There is no more new data to feed in. AI coding tools will not get substantially better than they are now. All of the theft that could be committed has been committed, which is why the AI development companies are attempting to feed generated training material into their models. Every review of this shows that it makes the output from generative models worse rather than better.

      Programming is not about writing code. That is what a manager thinks.
      Programming is about solving problems. Generative AI doesn’t think, so it cannot solve problems. All it can do is regurgitate material that it has previously ingested which is hopefully close-ish to the problem you’re trying to solve at the moment - material which was written by a real thinking human that solved that problem (or a similar one) at some point in the past.
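      To make “close-ish” concrete, here is a deliberately crude caricature of that regurgitation model (the corpus and the word-overlap similarity are invented for illustration; no real model is this literal): it can only hand back whichever stored, human-written snippet looks nearest to the query.

          # A caricature of "all it can do is regurgitate": answers are limited
          # to whatever a human already wrote, picked by crude similarity.
          training_corpus = {
              "reverse a string": "s[::-1]",
              "sum a list of numbers": "sum(xs)",
              "read a file line by line": "for line in open(path): ...",
          }

          def similarity(a: str, b: str) -> float:
              """Word-overlap score standing in for whatever the model learned."""
              wa, wb = set(a.split()), set(b.split())
              return len(wa & wb) / len(wa | wb)

          def answer(query: str) -> str:
              # A genuinely new problem still gets an old answer, right or not.
              best = max(training_corpus, key=lambda p: similarity(p, query))
              return training_corpus[best]

          print(answer("reverse a list of numbers"))  # returns "sum(xs)": close-ish, wrong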

      If you patronize a generative AI system like Claude Code, you are paying into, participating in, and complicit in, the largest example of labor theft in history.

      • undeffeined@lemmy.ml · 1 day ago · +23

        Programming is not about writing code… Programming is about solving problems

        Very well put. Good managers know that, but those are very rare… And top management the world over is completely bought into this snake oil; it makes me feel insane.

      • Scubus@sh.itjust.works · 18 hours ago · +3/-5

        I’m not entirely convinced this is accurate. I do see your point, and I had not considered that there is no more training data to use, but at the end of the day our current AI is just pattern recognition. Hence, would you not be able to use a hybrid system where you set up billions of use cases (translate point A to point B, apply a force such that object A rolls a specified distance, set up a neural network using backpropagation with 3 hidden layers, etc.) and then have two adversarial AIs? One attempts to “solve” the use case by randomly trying stuff, and the other basically just says “you’re not doing good enough, and here’s why”. Once the first is doing a good job with that very specific use case, index it. Now when people ask for that specific use case, or for a larger problem that includes it, you don’t even need AI; you just plug in the already-solved solution. Your code base basically becomes AI filling out every conceivable question on Stack Overflow.
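        Roughly, something like this (propose_solution and critique are hypothetical stand-ins for the two adversarial models, not real APIs):

            # Toy version of the proposal above: a solver/critic loop whose
            # accepted answers are indexed, so repeat requests skip the AI
            # entirely and just look up the stored solution.
            solved_index: dict[str, str] = {}  # use case -> vetted solution

            def propose_solution(use_case: str, feedback: str | None) -> str:
                """Stand-in for the solver model: tries something, guided by feedback."""
                return f"candidate code for {use_case!r}, revised after: {feedback}"

            def critique(use_case: str, candidate: str) -> str | None:
                """Stand-in for the adversary: returns an objection, or None to accept."""
                return None  # this stub accepts everything

            def solve(use_case: str, max_rounds: int = 100) -> str:
                if use_case in solved_index:       # already solved once:
                    return solved_index[use_case]  # no AI involved at all
                feedback = None
                for _ in range(max_rounds):
                    candidate = propose_solution(use_case, feedback)
                    feedback = critique(use_case, candidate)
                    if feedback is None:  # the critic has run out of objections
                        solved_index[use_case] = candidate  # index it for next time
                        return candidate
                raise RuntimeError(f"no candidate survived the critic for {use_case!r}")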

        Obviously this isn’t actual coding with AI; at the end of the day you’re still doing all the heavy lifting. It’s effectively no different from how most coders code today, just stealing code from Stack Overflow XD The only difference would be that Stack Overflow is basically filled with every conceivable question, and if yours isn’t answered, you can just request that they set up a new pair of adversarial AIs to solve the new problem.

        Secondly, you are the first person to give me a solid reason as to why the current paradigm is unworkable. Despite my mediocre recall, I have spent most of my life studying AI, well before all this LLM stuff, so I like to think I was at least well educated on the topic at one point. I appreciate your response. I am somewhat curious about what architecture changes need to be made to allow for actual problem solving. The entire point of a neural network is to replicate the way we think, so why do current AIs only seem to be good at pattern recognition and not even the most basic problem solving? Perhaps the architecture is fine, but we simply need to train up generational AIs that specifically focus on problem solving instead of pattern recognition?

        • pinball_wizard@lemmy.zip · 3 hours ago · +2

          Perhaps the architecture is fine, but we simply need to train up generational AIs that specifically focus on problem solving instead of pattern recognition?

          I mean, the architecture clearly isn’t fine. We’re getting very clever results, but we are not seeing even basic reasoning.

          It is entirely possible that AGI can be achieved within our lifetime. But there is substantial evidence that our current approach is a complete and total dead end.

          Not to say that we won’t use pieces of today’s solution; of course we will. But something unknown, yet really important and necessary for AGI, appears to be completely missing right now.

    • mech@feddit.org · 1 day ago · +22/-2

      ten years from now (assuming it doesn’t crash our economy in more ways than one) I would imagine the software will at least be comparable to an entry-level programmer.

      That’s a big assumption. Also, AI stopped progressing in the past year, which shows it’s already hit its ceiling.

      • undeffeined@lemmy.ml · 24 hours ago · +17

        Like the user above said, virtually all human code has already been fed to the models. Very little new, original human code is being put online now, and wider adoption of LLMs in coding will only exacerbate this.

      • Scubus@sh.itjust.works · 18 hours ago · +4/-1

        It is a massive assumption, but I tend to operate on the belief that capitalism will fall soon and some decent system will replace it, because otherwise all this is a moot point anyway. Without the destruction of capitalism we’ll all be dead within 20 years, so I don’t find it as entertaining to speculate about a future where we are still bound by things like our economy. As for the progression of AI this last year, didn’t Claude just learn to lie? That’s a huge step in development, not because it’s good, but because it shows our models are progressing. They know what we’re aiming for and they’re willing to lie to get to that target, which does actually show the most basic level of problem solving. That’s… huge, actually.

      • Scubus@sh.itjust.works · 18 hours ago · +4/-3

        Downvotes do nothing but stifle conversations on Lemmy anyway, and fortunately I woke up to find that a bunch of people were willing to have pretty solid discussions on the matter. All in all, it definitely seems people here are more willing to discuss the problems than somewhere like c/fuckai.

        I learned something, and I like to think I got people thinking, so I’d say it was worthwhile :)