image description (contains clarifications on background elements)
Lots of different seemingly random images in the background, including some fries, mr. crabs, a girl in overalls hugging a stuffed tiger, a mark zuckerberg "big brother is watching" poster, two images of fluttershy (a pony from my little pony), one of them reading "u only kno my swag, not my lore", a picture of parkzer from the streamer "dougdoug" and a slider gameplay element from the rhythm game "osu". The background is made light so that the text can be easily read. The text reads:
i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- training processes for current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving it a good purpose for average-joe usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems results
in bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?
IMAGE DESCRIPTION END
i hope this doesn't cause too much hate. i just wanna know what u people and creatures think <3
Mr crabs would use unethical llms, very accurate
true, he would totally replace his workers with robots, and then complain about hallucinated recipes.
I used to think image generation was cool back when it was still in the "generating 64x64 pictures of cats" stage. I still think it's really cool, but I do struggle to see it being a net positive for society. So far it has seemed to replace the use of royalty-free stock images from Google more than it has replaced actual artists, but this could definitely change in the future.
There are some nicer applications of image generation too, like DLSS upscaling or frame generation, but I can't think of all that much else honestly.
There are so many different things that are called AI, the term AI doesn't have any meaning whatsoever. Generally it seems to mean anything that includes machine learning somewhere in the process, but it's largely a marketing term.
Stealing art is wrong. Using ridiculous amounts of power to generate text is ridiculous. Building a text model that will very confidently produce misinformation is pretty dumb.
There are things that are called AI that are fine, but most arenāt.
This list is missing: AI generated images are not art.
i also think that way, but it's also true that generated images are being used all over the web already, so people generally don't seem to care.
I disagree, but I can respect your opinion.
A lot of those points boil down to the same thing: "what if the AI is wrong?"
If it's something that you'll need to check manually anyway, or where a mistake is not a big deal, that's probably fine. But if it's something where a mistake can affect someone's well-being, that is bad.
Reusing an example from the pic:
- Predicting 3D structures of proteins, as in the example? OK! Worst case, the researchers will notice that the predicted structure does not match the real one.
- Predicting if you have some medical problem? Not OK. A false negative can cost a life.
That's of course for the usage. The creation of those systems is another can of worms, and it involves other ethical concerns.
of course using ai stuffs for medical usage is going to have to be monitored by a human with some knowledge. we can't just let it make all the decisions… quite yet.
in many cases, ai models are already better than expert humans in the field. recognizing cancer being the obvious example, where the pattern recognition works very well. or with protein folding, where humans are at about 60% accuracy, while Google's AlphaFold is at 94% or so.
clearly humans need to oversee an AI's output, but we are getting to a point where maybe humans make the wrong decision, and deny an AI's correct generation. so: no additional lives are lost, but many more could be saved
I think we should avoid simplifying it to VLMs, LMs, Medical AI and AI for disabled people.
For instance, most automatic text-capture AIs (Optical Character Recognition, or OCR) are powered by the same machine learning algorithms. Many of the more capable robot systems also utilize machine learning (Boston Dynamics, for instance). There's also the ability to ID objects within footage, as well as spot faces and reference them against a large database in order to find the person with said face.
All these are Machine Learning AI systems.
I think it would also be prudent to cease using the term "AI" when what we actually are discussing is machine learning, which is a much finer subset. Simply saying "AI" diminishes the term's actual broader meaning and removes the deeper nuance the conversation deserves.
Here are some terms to use instead
- Machine Learning = AI systems which increase their capability through automated iterative refinement.
- Evolutionary Learning = a type of machine learning where many instances of randomly changed AI models (called a "generation") are run simultaneously, and the most effective is/are used as a baseline for the next "generation"
- Neural Network = a type of machine learning system which utilizes very simple nodes called "neurons" for processing. These are often used for image processing, LMs, and OCR.
- Convolutional Neural Network (CNN) = a neural network which has an architecture of neuron "filters" layered over each other for powerful data processing capabilities.
This is not exhaustive but hopefully will help in talking about this topic in a more definite and nuanced fashion. Here is also a document related to the different types of neural networks
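The "evolutionary learning" loop defined above can be sketched in a few lines. This is a toy illustration under made-up assumptions: the "model" is just a single number we want to drive toward a target value, and all names (`fitness`, `mutate`, `TARGET`) are invented for the example, not from any real library.

```python
import random

random.seed(0)  # make the toy run reproducible

TARGET = 42.0  # the behaviour we want the "model" to learn

def fitness(model: float) -> float:
    # Higher is better: negative distance to the target.
    return -abs(model - TARGET)

def mutate(model: float) -> float:
    # A "randomly changed" copy of the model.
    return model + random.uniform(-1.0, 1.0)

best = 0.0  # baseline model for generation 0
for _ in range(200):
    # One "generation": many randomly changed instances run simultaneously...
    population = [mutate(best) for _ in range(20)]
    # ...and the most effective one becomes the baseline for the next generation.
    best = max(population + [best], key=fitness)

print(f"best model after 200 generations: {best:.2f}")
```

Real evolutionary algorithms mutate millions of network weights rather than one number, but the generate-evaluate-select loop is the same shape.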
There is an overarching issue with most of the extant models being highly unethical in where they got their data, effectively having made plagiarism machines.
It is not ok to steal the content of millions of small independent creators to create slop that drowns them out. Most of them were already offering their work for free. And I am talking about LMs here; writing is a skill.
Say whatever you want about big companies being bad for abusing IP laws, but this is not about the laws, not even about paying people for their work; this is about crediting people when they do work, acknowledging that the work they did had value, and letting people know where they can find more.
Also, I don't really buy the "it's good for disabled people" argument; that feels like using disabled people as a shield against criticism, and I've yet to see it brought up in good faith.
I think generative AI is mainly a tool of deception and tyranny. The use cases for fraud, dehumanization and oppression are plentiful. I think Iris Meredith does a good job of highlighting the threat at hand. I don't really care about the tech in theory: what matters right now is who builds it and how it is being deployed onto the world.
I wish people stopped treating these fucking things as a knowledge source, let alone a reliable one. By definition they cannot distinguish facts, only spit out statistically correct-sounding text.
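As a toy illustration of "statistically correct-sounding text": a tiny bigram model only learns which word tends to follow which in its training text, and samples from those counts with no notion of whether the result is true. (Entirely illustrative and massively simplified; real LMs are far more sophisticated, but the objective of producing statistically plausible continuations is similar.)

```python
import random

# Tiny training "corpus"; the model will only ever know these co-occurrences.
corpus = "the sky is blue the sea is blue the sky is vast".split()

# Count which words follow each word.
follows: dict = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

random.seed(1)
word, out = "the", ["the"]
for _ in range(5):
    # Sample the next word from what statistically followed the current one;
    # fall back to the whole corpus if the word was never seen mid-sentence.
    word = random.choice(follows.get(word, corpus))
    out.append(word)

print(" ".join(out))  # fluent-looking output, with no fact-checking anywhere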
Are they of help to your particular task? Cool, hope the model you're using hasn't been trained on stolen art, or doesn't rely on traumatizing workers in the global south (who are paid pennies, btw) to function.
Also, y'know, don't throw gasoline on an already burning planet if possible. You might think you need to use a GPT for a particular task or funny meme, but chances are you actually don't.
That's about it for me I think.
edit: when i say "you" in this post i don't mean actually you OP, i mean in general. sorry if this seems rambly, im sleep deprived as fuck woooooo
In my experience, the best uses have been less fact-based and more "enhancement" based. For example, if I write an email and I just feel like I'm not hitting the right tone, I can ask it to "rewrite this email with a more inviting tone" and it will do a pretty good job. I might have to tweak it, but it worked. Same goes for image generation. If I already know what I want to make, I can have it output the different elements I need in the appropriate style and piece them together myself. Or I can take a photograph that I took and use it to make small edits that are typically very time consuming. I don't think it's very good or ethical for having it completely make stuff up that you will use 1:1. It should be a tool to aid you, not a tool to do things for you completely.
What does "AI for disabled people" entail? A lot of "good AI" things I see are things I wouldn't consider AI, e.g. VLC's local subtitle generation.
I'll just repeat what I've said before, since this seems like a good spot for this conversation.
I'm an idiot with no marketable skills. I want to write, I want to draw, I want to do a lot of things, but I'm bad at all of them. GPT-like AI sounds like a good way for someone like me to get my vision out of my brain and into the real world.
My current project is a wiki of lore for a fictional setting, for a series of books that I will never actually write. My ideal workflow involves me explaining a subject as best I can to the ai (an alien technology or a kingdom's political landscape, or drama between gods, or whatever), telling the ai to ask me questions about the subject at hand to make me write more stuff, repeat a few times, then have the ai summarize the conversation back to me. I can then refer to that summary as I write an article on the subject. Or, me being lazy, I can just copy-pasta the summary and that's the article.
As an aside, I really like ChatGPT 4o for lore exploration, but I'd prefer to run an ai on my own hardware. Sadly, I do not understand GitHub and my brain glazes over every time I look at that damn site.
It is way too easy for me to just let the ai do the work for me. I've noticed that when I try to write something without ai help, it's worse now than it was a few years ago. generative ai is a useful tool, but it should be part of a larger workflow, it should not be the entire workflow.
If I was wealthy, I could just hire or commission some artists and writers to do the things. From my point of view, it's the same as having the ai do the things, except it's slower and real humans benefit from it. I'm not wealthy though, hell, I struggle to pay rent.
The technology is great, the business surrounding it is horrible. I'm not sure what my point is.
I'm sorry, but did you ever think of the option to try? To write a story you have to work on it and get better.
GPT or llms can't write a story for you, and if you somehow wrangle it to write a story without losing its thread - then is it even your story?
look, it's not going to be a good story if you don't write it yourself. There's a reason why companies want to push it: they don't want writers.
I'm sure you can write something, but that you have issues which you need to deal with before you can delve into this. I'm not saying it's easy, but it's worth it.
Also read books. Read books to become a better writer.
PPS. If you make an llm write it you'll come across issues copyrighting it, at least last I heard.
Smorty!!!
Thank you for this conversation
i don think i understand your comment…
or maybe that's the point?
or maybe ur making a funi joke about u being an AI assistant?
If so:
haha lol that's so hilarious
yea i like LMs kinda a smol bit and like experimenting with em a lot, cuz it's kinda fun to test their capabilities and such
if not: pls explain <3
response output --verbose:
Line 1: Smorty!!!
Explanation: You brighten my day every time I see you doing your thing. Line 1 expresses this joy.
Line 2: Thank you for this conversation
Explanation: I am glad to see peoples' replies to your post. Line 2 thanks you for starting this discussion.
really??? i didn't kno i make u comf when i post a thing!! ~ i'm very happi about that!!! <3
also, i'm surprised that u still like the fact that i made this convo spring up. many peeps are very one-sided about this, and i recognize that i am more pro-ai than con-ai. i wanted to hear peeps's thoughts about it, so i jus infodump in an image with fluttershy in it, and now we are here!
i would think that u wouldn't like this kind of very adult topic about ai stuffs but apparently u are oki with me asking very serious things on here…
i hope u have a comf day and that u sleep well and that u eat something nice!!! <3
Honest question, how does AI help disabled people, or which kinds of disabilities?
One of the few good uses I see for audio AI is translation using the voice of the original person (though that'd deal a significant blow to dubbing studios)
fair question. i didn't think that much about what i meant by that, but here are the obvious examples
- image captioning using VLMs, including detailed multi-turn question answering
- video subtitles, already present in youtube and VLC apparently
i really should have thought more about that point.
I don't see how AI is inherently bad for the environment. I know they use a lot of energy, but if the energy comes from renewable sources, like solar or hydroelectric, then it shouldn't be a problem, right?
The problem is that we only have a finite amount of energy. If all of our clean energy output is going toward AI, then yeah, it's clean, but it means we have to use other less clean sources of energy for things that are objectively more important than AI - powering homes, food production, hospitals, etc.
Even "clean" energy still has downsides for the environment, like noise pollution (impacts local wildlife), taking up large amounts of space (deforestation), using up large amounts of water for cooling, or having emissions that aren't greenhouse gases, etc. Ultimately we're still using unfathomably large amounts of energy to train and use a corporate chatbot trained on all our personal data, and that energy use still has consequences even if it's "clean"
i kinda agree. currently many places still use oil for energy generation, so that kinda makes sense.
but if powered by cool solar panels and cool wind turbine things, that would be way better. then it would only be down to the production of GPUs and the housing.