Surprised pikachu face
So what exactly is open about their ai
Open(your fucking wallet)AI
It’s criminal they’re keeping the name OpenAI
They put "open" in their name to attract good talent and investment, and so people would have a soft spot for them while they collect tons of data to build their product.
Their internal chats, released in the Musk lawsuit, reveal they knew they were going to switch to a for-profit model ("they" here meaning the top brass). But they still lied to everybody about their intentions.
It’s called OpenAI because they are open to stealing content to train their AI
“I made this!”
You made this?
…I made this!
I just used ChatGPT to copy this, so I’m sorry fellas, but actually now I made this.
Except in the AI version it has the wrong number of fingers, and the text is spelled wrong.
I͖ͭ̍̀̏͂̋̏͛ ͕͚̱̗̭͗͗͑ͥͨ̆ͥ͊ͅm̤̻͕̪̥͓͍̿̑̚a̜̝͖̯̰̦͐ͭ̄͋c̠̹̱̱̖ͦ̿̋͗̎l̝̭͚͇̎ͧe͙͕͂͆ ̳̩̦͙̯̮̙t̘̯̯ͣͧ̍ȉ̩̜́̏̂ͯ̉͑̄s̪͖̎ͬ̐͆
Can’t argue with objective truth
shit, bro. Deep
“Don’t tell me what to do, bro!”
… They weren’t before?
Yeah, they weren’t as synergized. Now they’re coordinating with key stakeholders to maximize the efficiency of their aggressive roadmap. Or something, I kinda suck at business jargon.
Wild for a company that’s never made a profit
These companies do not make a profit on paper but have already made millions for others.
It’s all smoke and mirrors
Oh it’s made plenty for Nvidia.
As Ed said, Sam Altman has been a plague.
As long as the shareholders are happy.
How is this going to work while OpenAI currently burns through an absolute ocean of cash to keep improving its services? On top of that, a good software engineer or applied scientist can make close to $1m a year. While I do think professionals should earn what they're worth to an employer, OpenAI still loses a ton of money.
As someone who works in AI, I think most of us know it's full of people trying to make a quick buck while investors stupidly throw money at it. OpenAI is ultimately the figurehead of this market, though, because at least the big companies can prop up their AI offerings with the money they make from shopping, cloud, ads, etc. The second OpenAI looks weak and needs money, the vultures will slice off a piece and we'll see the AI market reduce to a whimper - just enough for tech to focus on the next grift.
About the only AI company currently alive that I'm sure will survive is CivitAI. Huggingface probably, too. Both are, in the end, in the datacenter business. Huggingface has exposure to VC BS in their client base; they might be in trouble if a significant number suddenly go belly-up, but if they have any sense they'll simply not overextend. And, well, they, too, can switch to cat pictures.
Yeah, some of my team members use HF and it really is a convenience (basically a GitHub for models), but to be clear, we can't rely on them alone. I don't trust any company to still exist, or to not be bought out and enshittified, in 3 years.
Step 1. Make an AI that hoovers up content.
Step 2. When owners of content complain about privacy violations and copyright infringement, allay their fears. This AI is for the Good of Humanity.
Step 3. ???
Step 4. Profit.
Yet another example of doing crime at a big enough scale that you get rewarded for it. That’s what this country was built on.
Surely they will be sued into oblivion if they try, right? Their being a nonprofit was the main pillar holding up their defense for scraping the web into datasets.
This is what Ilya saw…
Just want to point out that it absolutely is possible to train an AI that will keep track of its sources for inspiration and can attribute those when it makes a response.
Meaning creators could be compensated for their parts of AI generated stuff, if anyone wanted to.
I think that there are some people working on this, and a few groups that have claimed to do it, but I’m not aware of any that actually meet the description you gave. Can you cite a paper or give a link of some sort?
Other than citing the entire training data set, how would this be possible?
The entire training set isn’t used in each permutation. Your keywords are building the samples based on metadata tags tied back to the original images.
If you ask for “Iron Man in a cowboy hat”, the toolset will reach for some catalog of Iron Man images and some catalog of cowboy hat images and some catalog of person-in-cowboy-hat images, when looking for a basis of comparison as it renders the image.
These would be the images attributed to the output.
Do you have a source for this? This sounds like fine-tuning a model, which doesn’t prevent data from the original training set from influencing the output. The method you described would only work if the AI is trained from scratch on only images of iron man and cowboy hats. And I don’t think that’s how any of these models work.
Doesn’t Phind do this already? I haven’t used it much, but I remember it showing its sources for answers to code-related stuff.
I use Phind for solving computer problems. It does cite the sources it uses, at least for distro and general Linux issues. So far, it’s been a very good resource when I’ve needed it.
The article could be from 2022 and I’d be as unsurprised as I am now.
Reminder: there are locally run LLMs. Right now is a vital time for open source to fight against closed source in the AI arms race.
Another good resource to help people find models https://llm.extractum.io
Or just straight up install https://ollama.com
I like Ollama, and recommend it to tinker, but I admit this “LLM Explorer” is quite neat thanks to sections like “LLMs Fit 16GB VRAM”
Ollama just works but it doesn’t help to pick which model best fits your needs.
pick which model best fits your needs.
What need do I have to put in the effort to install all this locally? Websites win in terms of convenience.
I don’t think I understand your point. Are you saying there is no benefit in running locally, and that websites or APIs are more convenient?
I already have Stable Diffusion on a local machine. I was trying to find motivation to install an LLM locally. You answered my question in a different response.
For use cases where customization helps while quality doesn’t matter much due to scale, i.e. spam, LLMs and related tools are amazing.
I want to work on my stuff in peace and in private without worrying about a company grabbing my stuff and using it for themselves and to give/sell it to other outfits, including the government. “If you have nothing to hide…” is bullshit and needs to die.
Good point. Everything you feed into ChatGPT is stored for future reference.
At the same time, the trouble with local LLMs is that they’re very resource heavy. Your average household computer isn’t going to be able to run one with much usability or speed.
It’s a lot slower than ChatGPT, but on my integrated-graphics i7 laptop it ran decently, definitely enough to be usable. Also there are different models to play around with; some are faster but worse, and some are smarter but slower.
Which, you know, is fine. Maybe if people had an idea of how much power is required to run them, they would think twice before using a gigawatt to output a poem about farts, and perhaps even wonder how OpenAI can offer that for free. BTW, a 7B model should run OK on any PC with at least 16GB of RAM and a modern processor/GPU.
Phi-3 can run on pretty low specs (requires 4GB RAM) and has relatively good output
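For anyone wondering whether their machine fits the 16GB / 4GB figures mentioned above, the back-of-envelope math is just parameter count times bytes per weight, plus some headroom for the KV cache and runtime. A rough sketch (the flat 1 GB overhead is my own assumption; real overhead grows with context length):

```python
# Rough memory math for running a quantized model locally.
def model_footprint_gb(params_billion: float, bits_per_weight: int, overhead_gb: float = 1.0) -> float:
    """Weight memory plus a flat allowance for KV cache and runtime overhead."""
    weights_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1e9
    return weights_gb + overhead_gb

# A 7B model at 4-bit quantization: ~3.5 GB of weights, ~4.5 GB total.
print(f"7B @ 4-bit: ~{model_footprint_gb(7, 4):.1f} GB")
# Phi-3-mini (3.8B params) at 4-bit: ~1.9 GB of weights, ~2.9 GB total.
print(f"3.8B @ 4-bit: ~{model_footprint_gb(3.8, 4):.1f} GB")
# The same 7B model unquantized (fp16) needs ~14 GB for weights alone.
print(f"7B @ fp16: ~{model_footprint_gb(7, 16):.1f} GB")
```

This is why a 4-bit 7B model is comfortable on a 16GB machine while the fp16 version of the same model isn't.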
Okay but what problem does that solve? Is the solution setting up our own spambots to fill forums with arguments counter to their bullshit spambots? I don’t see how an LLM improves literally anything ever in any circumstance.
You seem unnecessarily hostile about this. If you don’t like LLM just move on.
This is exactly why this sub about technology is better off without business news. You’re just reacting to something you hate and directing that at others.
But answer the question maybe
Also, my “hate” was very clearly directed towards LLMs and not a “person”.
It definitely improves my experience coding in unfamiliar languages. So there’s your counter example.
improves my experience coding in unfamiliar languages
Alan Perlis said “A programming language that doesn’t change the way you think is not worth learning.”
So… if you code in another language without actually “getting it”, solely having a usable result, what is actually the point of changing languages?
Exactly. I see AI as a tool to automate the boring parts, if you try to automate the hard parts, you’re going to have a bad time.
Take the time to learn the tools you use thoroughly, and then you can turn to AI to make your use of those tools more efficient. If I’m learning woodworking, for example, I’m going to learn to use hand tools first before using power tools, but there’s no way I’m sticking to hand tools when producing a lot of things. Programming isn’t any different, I’ll learn the language and its idioms as deeply as I can, and only then will I turn to things like AI to spit out boilerplate to work from.
Mind explaining a bit your workflow at the moment?
I’m not sure how to succinctly do that.
When I learn a new language, I:
- go through whatever tutorial is provided by the language developers - for Rust, that’s The Rust Programming Language, for Go, it’s Tour of Go and Effective Go
- build something - for Go, this was a website, and for Rust it was a Tauri app (basically a website); it should be substantial enough to exercise the things I would normally do with the language, but not so big that I won’t finish
- read through substantial portions of the standard library - if this is minimal (e.g. in Rust), read through some high profile projects
- repeat 2 & 3 until I feel confident I understand the idioms of the language
I generally avoid setting up editor tooling until I’ve at least run through step 3, because things like code completion can distract from the learning process IMO.
Some books I’ve really enjoyed (i.e. where step 1 doesn’t exist):
- The C Programming Language - by Brian Kernighan and Dennis Ritchie
- Programming in Lua - by Roberto Ierusalimschy
- Learn You a Haskell for Great Good - by Miran Lipovača (available free online)
But regardless of the form it takes, I appreciate a really thorough introduction to the language, followed by some experimentation, and then topped off with some solid, practical code examples. I generally allow myself about 2 weeks before expecting to write anything resembling production code.
These days, I feel confident in a dozen or so programming languages (I really like learning new languages), and I find that thoroughly learning each has made me a better programmer.
Thanks for that, was quite interesting and I agree that completion too early (even… in general) can be distracting.
I did mean about AI though: how you manage to integrate it into your workflow to “automate the boring parts”. I’m curious which parts are “boring” for you, which tools you actually use, and how. In particular, how you estimate whether something can be automated with AI, how long it might take, how often you’re correct about that bet, how you store and possibly share past attempts to automate, etc.
I have a job to do. And I understand the other language conceptually, I am just rusty on the syntax.
Also the chat feature is invaluable. I can highlight a piece of code and ask what it does, and copilot explains it.
From all the studies available, LLMs increase the rate at which low-skilled workers complete tasks. They also lower accuracy, so expect some of the tasks to be done incorrectly.
If your metric for “improves” is being a better low skill drone forever then yes I’m sure it’s helping you. Here is a novel idea, maybe learn the language from a reliable source instead of taking the word of a bullshit generator at face value?
Here’s an idea, maybe start with curiosity about how someone is getting value out of it? It’s possible you don’t know everything about other people’s experiences.
It’s something being shoved down our throats every second of every day, and I’ve seen enough to know I don’t like it. Curiosity was satiated a long ass time ago. It’s just a bigger power draw than cryptocurrency, with somehow magically even less value.
FWIW I did try a lot (LLMs, code, generative AI for images, 3D models) in a lot of ways (CLI, Web based, chat bot) both locally and using APIs.
I don’t use any on a daily basis. I find it exciting that we can theoretically do a lot “more” automatically, but so far the results have not been worth the effort. Sadly, some of the best use cases are exactly what you highlighted, i.e. low-effort engagement for spam. Overall I find that working with a professional (script writer, 3D modeler, dev, designer, etc.) is not just more rewarding but also more efficient, which itself makes it cheaper.
For use cases where customization helps while quality doesn’t matter much due to scale, i.e. spam, LLMs and related tools are amazing.
PS: I’d love to hear the opinion of a spammer actually, maybe they also think it’s not that efficient either.
I have personally found generative-text LLMs quite good for creating titles. As an example, I have a few hundred tweets that I’m saving to files, and I’ll use an LLM to create a human-readable name for each. It’s much better than a lot of the other summarisation mechanisms (like BERT) I’ve tried, but it’s still not perfect: the model tends to output the same thing in slightly different words each time, so repeat runs will often give the same content a different title.
But, that is also a fairly limited use case.
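A minimal sketch of the filename half of that workflow (the LLM call itself is left out, and `title_to_filename` is a hypothetical helper, not part of any library): it slugifies whatever title the model returns, so even if repeat runs reword the title, the result is always a valid, comparable filename.

```python
import re

def title_to_filename(title: str, max_len: int = 60) -> str:
    """Turn a model-generated title into a safe, human-readable filename."""
    slug = title.lower().strip()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)   # collapse punctuation/spaces to dashes
    slug = slug.strip("-")[:max_len].rstrip("-")
    return slug + ".txt"

print(title_to_filename('OpenAI\'s Pivot: "Open" in Name Only?'))
```

Pinning the sampling temperature to 0, where the model or API allows it, also helps make repeat runs produce the same title in the first place.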
My favorite part of this image is the image corruption at the bottom lol. Hopefully that wasn’t a local issue on my side, or I’m gonna look insane
Somewhere along the way my copy got janked. I liked it, so I kept using it.
Ah, the John Cleesachu.
Shocked Cleesachu
Much open, very organic, very demure, so mindful.
Probably has to be renamed to “ClosedAI” then.