• 1 Post
  • 79 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • Time for Dropbox users to upload all kinds of crap for AI to “learn” from, all within the ToS of course.

    I bet there are many ways to make your files poison the AI training data. It’s going to be fun for those AI guys to sort out which files are probably safe and which are not. I think even if ONE user manages to slip in something that corrupts the training data, and it’s not noticed soon enough, it might cause problems for them. Though someone who actually knows something about the subject might want to tell me if I’m talking shit or not.

    I’m not against AI in general, but if it’s trained on data obtained from unwilling people, like this, then its makers can fuck off.

  • True. Though while it’s horrible for those people, they might be doing more important work than they or we even realize. I also kind of trust the moral judgement of the oppressed more than the oppressor (since they are the ones who do the work). Though I’m definitely not condoning the exploitation of those people.

    It’s quite awful that this seems to be the best we can hope for here. I doubt Google or Microsoft would give very positive guidance on whether it’s OK for people to suffer if it leads to more money for investors when they do their own labeling.


  • This is actually extremely critical work, if the results are going to be used by AIs that see wide use. This essentially determines the “moral compass” of the AI.

    Imagine if some big corporation did the labeling, trained some huge AI with that data, and it became widely used. Then years pass, and eventually AI develops to the point that it can reliably replace entire upper management. Suddenly, becoming a slave to an “evil” AI overlord starts moving from a beyond-crazy idea to a plausible one (years and years in the future, not now, obviously).