AI is stealing writers’ words and jobs…

So I was looking at civitai and saw this as part of one model's description: "This new model was fine-tuned using a vast collection of public domain images".

I don't think civitai would be the best place to look, as their models are fine-tunes or merges of existing models, which are themselves accused of being unethically trained. Any model built on top of them would draw the same reaction, I'd guess, depending on one's view of what is ethical. Microsoft does claim its AI is ethical, and who knows how DALL-E's dataset was obtained, but OpenAI is being sued over its LLM, so maybe they are not as ethical as they claim. Firefly is undoubtedly clean (because Adobe acquired the rights to the images they used), but since training a base model from scratch is extremely expensive, there will be little effort to make a state-of-the-art model out of only public-domain data unless there is a clear ruling that it is illegal to do "business as usual".
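
To make the fine-tune vs. from-scratch distinction concrete, here's a minimal Python sketch using the Hugging Face diffusers library. The model ID and workflow here are illustrative assumptions, not any specific civitai model's actual recipe:

```python
# Illustrative sketch only; the model ID and workflow are examples,
# not a specific civitai model's recipe.
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

# How most "new" civitai models start: load pretrained base weights, then
# fine-tune. The result inherits whatever data the base was trained on,
# no matter how clean the fine-tuning set is.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# ... fine-tune pipe.unet on your public-domain images here ...

# Training "from scratch" means randomly initialized weights: no inherited
# training data, but vastly more compute (this is the expensive path).
config = UNet2DConditionModel.load_config(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
unet_from_scratch = UNet2DConditionModel.from_config(config)
```

In other words, a "trained on public domain images" claim on a fine-tune says nothing about the base model's data; only the from-scratch path escapes that.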

Edit: I forgot about the pixart-alpha model, which was trained on a CC-0 dataset: https://mpost.io/akash-networks-mainnet-8-upgrade-boosts-visibility-for-cloud-gpu-operations/

But it's more a demonstration of their training method (which is, they say, very cost-effective, reducing the required compute to around $30k) than an attempt to provide state-of-the-art generation.
 


So it seems some redditors have tried Nightshade, and, well:

As with a lot of AI people, they're probably a bit premature in that:

[screenshot attachment: scrnli_1_26_2024_6-44-52 PM.png]


Seems like the limited dataset makes it difficult to figure out how bad it can actually be, especially at this juncture.
 

Either way, from my understanding it won't be fully proven until someone trains a base model on enough Nightshade-poisoned images, likely millions of them.

EDIT: I should point out that the previous big thing, Glaze, basically failed rather quickly.
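
For intuition on why the absolute number of poisoned images matters less than the fraction of the training set, here's a toy Python sketch. This is plain label-flipping on synthetic data, not Nightshade's actual perturbation method; it just shows degradation scaling with the poisoned fraction:

```python
# Toy data-poisoning sketch: random label flipping on synthetic data.
# NOT Nightshade's actual attack, just an illustration that what matters
# is the poisoned *fraction* of the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # simple ground-truth labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for poison_frac in [0.0, 0.1, 0.25, 0.4]:
    y_poisoned = y_tr.copy()
    n_poison = int(poison_frac * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # corrupt the labels ("captions")
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    print(f"poisoned={poison_frac:.0%}  test accuracy={clf.score(X_te, y_te):.3f}")
```

The takeaway: a fixed number of poisoned images gets diluted as the dataset grows, so against a base model trained on billions of images, you'd plausibly need poisoned samples on the order of millions before anything visibly breaks.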
 


I mean, we'll see how many images it actually needs. Nightshade definitely seems more effective than those posts indicate, so I would not be as confident as they are that it's "dead", given that it's only just hit the scene.

Is it the tool's fault or the person using the tool or both?

The tool is still fundamentally the problem.

Step 1. Create a narrative, a setting where anyone can claim anything is true, and someone, somewhere, will both accept it and support you.
Step 2. Flood people with enough conflicting narratives that people are overwhelmed.
Step 3. Create tools that allow for the generation of even more content.
Step 4. Keep control of these tools and algorithms in the hands of a small group.
Step 5. Replace humans, and broadcast what you want people to think is real.

 
