image description (contains clarifications on background elements)

Lots of different, seemingly random images in the background, including some fries, Mr. Krabs, a girl in overalls hugging a stuffed tiger, a Mark Zuckerberg "big brother is watching" poster, two images of Fluttershy (a pony from My Little Pony), one of them reading "u only kno my swag, not my lore", a picture of Parkzer from the streamer "DougDoug", and a slider gameplay element from the rhythm game "osu". The background is made light so that the text can be easily read. The text reads:

i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- training processes for current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving it a good purpose for average-joe usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems results
in bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?

IMAGE DESCRIPTION END


i hope this doesn't cause too much hate. i just wanna know what u people and creatures think <3

  • AdrianTheFrog@lemmy.world · 2 months ago

    I used to think image generation was cool back when it was still in the "generating 64x64 pictures of cats" stage. I still think it's really cool, but I do struggle to see it being a net positive for society. So far it seems to have replaced royalty-free stock images from Google more than it has replaced actual artists, but this could definitely change in the future.

    There are some nicer applications of image generation too, like DLSS upscaling or frame generation, but I can't think of all that much else honestly.

  • Hildegarde@lemmy.blahaj.zone · 2 months ago

    So many different things are called AI that the term has hardly any meaning. Generally it seems to cover anything that includes machine learning somewhere in the process, but it's largely a marketing term.

    Stealing art is wrong. Using ridiculous amounts of power to generate text is wasteful. Building a text model that will very confidently produce misinformation is pretty dumb.

    There are things that are called AI that are fine, but most aren't.

  • Lvxferre [he/him]@mander.xyz · 2 months ago

    A lot of those points boil down to the same thing: "what if the AI is wrong?"

    If it's something that you'll need to check manually anyway, or where a mistake is not a big deal, that's probably fine. But if it's something where a mistake can affect someone's well-being, that is bad.

    Reusing an example from the pic:

    • Predicting 3D structures of proteins, as in the example? OK! Worst case, the researchers will notice that the predicted structure does not match the real one.
    • Predicting if you have some medical problem? Not OK. A false negative can cost a life.

    That's of course for the usage. The creation of those systems is another can of worms, and it involves other ethical concerns.

    • Smorty [she/her]@lemmy.blahaj.zone (OP) · 2 months ago

      of course using ai stuffs for medical usage is going to have to be monitored by a human with some knowledge. we can't just let it make all the decisions… quite yet.

      in many cases, ai models are already better than expert humans in the field. recognizing cancer is the obvious example, where the pattern recognition works perfectly. or protein folding, where humans are at about 60% accuracy, while google's alphafold is at 94% or so.

      clearly humans need to oversee an AI's output, but we are getting to a point where maybe humans make the wrong decision and deny an AI's correct generation. so: no additional lives are lost, but many more could be saved

  • JayDee@lemmy.sdf.org · 2 months ago

    I think we should avoid simplifying it to VLMs, LMs, Medical AI and AI for disabled people.

    For instance, most automatic text capture AIs (Optical Character Recognition, or OCR) are powered by the same machine learning algorithms. Many of the finer-capability robot systems also utilize machine learning (Boston Dynamics, for instance). There's also the ability to ID objects within footage, as well as to spot faces and reference them against a large database in order to find the person with said face.

    All these are Machine Learning AI systems.

    I think it would also be prudent to cease using the term 'AI' when what we actually are discussing is machine learning, which is a much finer subset. Simply saying 'AI' diminishes the term's actual broader meaning and removes the deeper nuance the conversation deserves.

    Here are some terms to use instead:

    • Machine Learning = AI systems which increase their capability through automated iterative refinement.
    • Evolutionary Learning = a type of machine learning where many instances of randomly varied AI models (called a 'generation') are run simultaneously, and the most effective are used as the baseline for the next 'generation'.
    • Neural Network = a type of machine learning system built from very simple nodes called 'neurons' (toy sketch below). These are often used for image processing, LMs, and OCR.
    • Convolutional Neural Network (CNN) = a neural network whose architecture layers neuron 'filters' over each other for powerful data processing capabilities.

    This is not exhaustive but hopefully will help in talking about this topic in a more definite and nuanced fashion. Here is also a document related to the different types of neural networks.
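
    To make 'neuron' concrete, here is a toy Python sketch of a single artificial neuron: a weighted sum of inputs pushed through an activation function. Purely illustrative and not from any particular library; real networks stack huge numbers of these and learn the weights through the iterative refinement described above.

      # toy sketch of one artificial neuron: weighted sum + nonlinearity
      # the weights are hard-coded here; real training learns them iteratively
      import numpy as np

      def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
          z = np.dot(inputs, weights) + bias   # weighted sum of the inputs
          return 1.0 / (1.0 + np.exp(-z))      # sigmoid activation, squashes to (0, 1)

      x = np.array([0.5, -1.2, 3.0])  # example inputs
      w = np.array([0.4, 0.7, -0.2])  # example weights
      print(neuron(x, w, bias=0.1))   # one activation value between 0 and 1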

  • megopie@lemmy.blahaj.zone · 2 months ago

    There is an overarching issue with most of the extant models: they are highly unethical in where they got their data, effectively making them plagiarism machines.

    It is not ok to steal the content of millions of small independent creators to create slop that drowns them out. Most of them were already offering their work for free. And I am talking about LMs here; writing is a skill.

    Say whatever you want about big companies abusing IP laws, but this is not about the laws, and not even about paying people for their work; it is about crediting people when they do work, acknowledging that the work had value, and letting people know where they can find more.

    Also, I don't really buy the "it's good for disabled people" argument; it feels like using disabled people as a shield against criticism, and I've yet to see it brought up in good faith.

  • arisunz@lemmy.blahaj.zone · 2 months ago

    I wish people would stop treating these fucking things as a knowledge source, let alone a reliable one. By definition they cannot distinguish facts; they only spit out statistically correct-sounding text.

    Are they of help to your particular task? Cool, I hope the model you're using hasn't been trained on stolen art and doesn't rely on traumatizing workers in the global south (who are paid pennies, btw) to function.

    Also, y'know, don't throw gasoline on an already burning planet if possible. You might think you need to use a GPT for a particular task or funny meme, but chances are you actually don't.

    That's about it for me I think.

    edit: when i say "you" in this post i don't mean actually you, OP, i mean in general. sorry if this seems rambly, im sleep deprived as fuck woooooo

  • BlueLineBae@midwest.social · 2 months ago

    In my experience, the best uses have been less fact-based and more "enhancement"-based. For example, if I write an email and I just feel like I'm not hitting the right tone, I can ask it to "rewrite this email with a more inviting tone" and it will do a pretty good job. I might have to tweak it, but it works. The same goes for image generation: if I already know what I want to make, I can have it output the different elements I need in the appropriate style and piece them together myself. Or I can take a photograph of mine and use it to make small edits that are typically very time-consuming. I don't think it's very good or ethical to have it completely make stuff up that you will use 1:1. It should be a tool to aid you, not a tool to do things for you completely.

  • flamingos-cant@feddit.uk · 2 months ago

    What does "AI for disabled people" entail? A lot of 'good AI' things I see are things I wouldn't consider AI, e.g. VLC's local subtitle generation.
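
    (For reference, features like VLC's local subtitle generation sit on top of local speech-to-text models such as Whisper. A minimal sketch of the same idea in Python, assuming the openai-whisper package and ffmpeg are installed; the file name is made up:)

      # minimal sketch: generate a transcript locally with a Whisper model
      # assumes `pip install openai-whisper` and ffmpeg available on PATH
      import whisper

      model = whisper.load_model("base")      # small, CPU-friendly model
      result = model.transcribe("video.mp4")  # ffmpeg extracts the audio track
      print(result["text"])                   # plain transcript text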

  • glitchdx@lemmy.world · 2 months ago

    I'll just repeat what I've said before, since this seems like a good spot for this conversation.

    I'm an idiot with no marketable skills. I want to write, I want to draw, I want to do a lot of things, but I'm bad at all of them. GPT-like AI sounds like a good way for someone like me to get my vision out of my brain and into the real world.

    My current project is a wiki of lore for a fictional setting, for a series of books that I will never actually write. My ideal workflow involves me explaining a subject as best I can to the ai (an alien technology, or a kingdom's political landscape, or drama between gods, or whatever), telling the ai to ask me questions about the subject at hand to make me write more stuff, repeating a few times, then having the ai summarize the conversation back to me. I can then refer to that summary as I write an article on the subject. Or, me being lazy, I can just copy-pasta the summary and that's the article.

    As an aside, I really like ChatGPT 4o for lore exploration, but I'd prefer to run an ai on my own hardware. Sadly, I do not understand GitHub and my brain glazes over every time I look at that damn site.
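
    (For what it's worth, running a model locally doesn't have to involve GitHub at all; local runners like Ollama expose an OpenAI-compatible API once installed. Below is a rough, hypothetical sketch of the "interview me, then summarize" loop described above; the model name, prompts, and number of rounds are all made up for illustration.)

      # hypothetical sketch of the interview-then-summarize lore workflow
      # assumes an OpenAI-compatible local server (e.g. Ollama on its
      # default port) and a model pulled as "llama3"; adjust to taste
      from openai import OpenAI

      client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
      MODEL = "llama3"  # assumed model name

      messages = [
          {"role": "system", "content": "You are a worldbuilding assistant. "
                                        "Ask one probing question at a time about the user's lore."},
          {"role": "user", "content": "Subject: the political landscape of my fictional kingdom."},
      ]

      for _ in range(3):  # a few rounds of questions
          reply = client.chat.completions.create(model=MODEL, messages=messages)
          question = reply.choices[0].message.content
          messages.append({"role": "assistant", "content": question})
          messages.append({"role": "user", "content": input(question + "\n> ")})

      # finally, ask for the summary that becomes the wiki article draft
      messages.append({"role": "user", "content": "Summarize our conversation as a wiki article."})
      summary = client.chat.completions.create(model=MODEL, messages=messages)
      print(summary.choices[0].message.content)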

    It is way too easy for me to just let the ai do the work for me. I've noticed that when I try to write something without ai help, it's worse now than it was a few years ago. generative ai is a useful tool, but it should be part of a larger workflow; it should not be the entire workflow.

    If I were wealthy, I could just hire or commission some artists and writers to do the things. From my point of view, it's the same as having the ai do the things, except it's slower and real humans benefit from it. I'm not wealthy though; hell, I struggle to pay rent.

    The technology is great; the business surrounding it is horrible. I'm not sure what my point is.

    • Cassa@lemmy.blahaj.zone · 2 months ago

      I'm sorry, but did you ever consider just trying? To write a story, you have to work on it and get better.

      GPT or LLMs can't write a story for you, and if you somehow wrangle one into writing a story without losing its thread, then is it even your story?

      look, it's not going to be a good story if you don't write it yourself. There's a reason why companies want to push it: they don't want writers.

      I'm sure you can write something, but you have issues you need to deal with before you can delve into this. I'm not saying it's easy, but it's worth it.

      Also read books. Read books to become a better writer.

      PPS. If you make an LLM write it, you'll run into issues copyrighting it, at least last I heard.

    • Smorty [she/her]@lemmy.blahaj.zone (OP) · 2 months ago

      i don't think i understand your comment…

      or maybe that's the point?

      or maybe ur making a funi joke about u being an AI assistant?
      If so:
      haha lol that's so hilarious<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|fim_prefix|>func get_length(vec1:Vector2) -> float:\n<|fim_suffix|> return length\n\nyea i like LMs kinda a smol bit and like experimenting with em a lot, cuz it's kinda fun to test their capabilities and such

      if not: pls explain <3

      • murmurations@lemmy.sdf.org · 2 months ago

        if not: pls explain <3

        response output --verbose:
        Line 1: Smorty!!!
        Explanation: You brighten my day every time I see you doing your thing. Line 1 expresses this joy.
        Line 2: Thank you for this conversation
        Explanation: I am glad to see people's replies to your post. Line 2 thanks you for starting this discussion.

        • Smorty [she/her]@lemmy.blahaj.zone (OP) · 2 months ago

          really??? i didn't kno i make u comf when i post a thing!! ~ i'm very happi about that!!! <3

          also, i'm surprised that u still like the fact that i made this convo spring up. many peeps are very one-sided about this, and i recognize that i am more pro-ai than con-ai. i wanted to hear peeps' thoughts about it, so i jus infodump in an image with fluttershy in it, and now we are here!

          i would think that u wouldn't like this kind of very adult topic about ai stuffs, but apparently u are oki with me asking very serious things on here…

          i hope u have a comf day and that u sleep well and that u eat something nice!!! <3

  • I Cast Fist@programming.dev · 2 months ago

    Honest question, how does AI help disabled people, or which kinds of disabilities?

    One of the few good uses I see for audio AI is translation using the voice of the original person (though that'd deal a significant blow to dubbing studios).

    • Smorty [she/her]@lemmy.blahaj.zone (OP) · 2 months ago

      fair question. i didn't think that much about what i meant by that, but here are the obvious examples:

      • image captioning using VLMs, including detailed multi-turn question answering (rough sketch below)
      • video subtitles, already present in YouTube and VLC apparently

      i really should have thought more about that point.
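
      a concrete sketch of the captioning idea, in python. just one example setup, using a smol BLIP captioning model from Hugging Face (the model choice and file name are jus examples, any VLM works):

        # rough sketch: local image captioning with a small vision-language
        # model via the Hugging Face "image-to-text" pipeline
        from transformers import pipeline

        captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
        result = captioner("photo.jpg")      # path or URL to any image
        print(result[0]["generated_text"])   # short description a screen reader can speak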

  • Staden_ スタデン@pawb.social · 2 months ago

    I don't see how AI is inherently bad for the environment. I know they use a lot of energy, but if the energy comes from renewable sources, like solar or hydroelectric, then it shouldn't be a problem, right?

    • Zangoose@lemmy.world · 2 months ago

      The problem is that we only have a finite amount of energy. If all of our clean energy output goes toward AI, then yeah, it's clean, but it means we have to use other, less clean sources of energy for things that are objectively more important than AI: powering homes, food production, hospitals, etc.

      Even "clean" energy still has environmental downsides, like noise pollution (which impacts local wildlife), taking up large amounts of space (deforestation), using up large amounts of water for cooling, or emissions that aren't greenhouse gases. Ultimately we're still using unfathomably large amounts of energy to train and run a corporate chatbot trained on all our personal data, and that energy use still has consequences even if it's "clean".

    • Smorty [she/her]@lemmy.blahaj.zone (OP) · 2 months ago

      i kinda agree. currently many places still use oil for energy generation, so that kinda makes sense.

      but if powered by cool solar panels and cool wind turbine things, that would be way better. then it would only be down to the production of GPUs and the housing.