I know there are other ways of accomplishing that, but this might be a convenient way of doing it. I’m wondering though if Reddit is still reverting these changes?
If this is true, it shifts the problem from “not having it” to “not knowing which version should be used” (to train the LLM).
They could feed it the unedited versions and call it a day, but a lot of the time people edit their content to correct it or add further info, especially for "meatier" content (like tutorials). So there's still some value in the edits, and I believe Google will be at least tempted to use them.
If that's correct, overwriting comments with nonsense will lower the value of the edited versions for LLM training. It should have an impact, just not as big a one as if there were no version history at all.
I know from experience (I’m a former Reddit janny) that moderators can’t see earlier versions of the content, only the last one. The admins might though.
The one from TD, right?
Wouldn’t be hard to scan a user and say:
It sounds like what’s needed here is a version of this tool that makes the edits slowly, at random intervals, over a period of time. And perhaps has the ability to randomize the text in each edit so that they’re all unusable garbage, but different unusable garbage (like the suggestion of taking ChatGPT output at really high temp that someone else made). Maybe it also only edits something like 25% of your total comment pool, and perhaps makes unnoticeably minor edits (add a space, remove a comma) to a whole bunch of other comments. Basically masking the poison by hiding it in a lot of noise?
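The strategy above can be sketched in code. This is a minimal, hypothetical Python sketch (none of these function names come from any real tool): it picks a poison fraction of a comment pool to overwrite with fresh random garbage, and gives every other comment an unnoticeably minor tweak as cover noise. Actually submitting the edits slowly, at random intervals, is left to the caller.

```python
import random
import string

def garbage_text(length=80):
    """Generate a fresh blob of random junk, so every poisoned edit
    is different unusable garbage rather than a shared fingerprint."""
    words = []
    while sum(len(w) for w in words) < length:
        words.append("".join(random.choices(string.ascii_lowercase,
                                            k=random.randint(3, 10))))
    return " ".join(words)

def minor_tweak(text):
    """Unnoticeably small edit: drop a comma or add a stray space."""
    if "," in text and random.random() < 0.5:
        i = text.index(",")
        return text[:i] + text[i + 1:]
    i = random.randrange(len(text) + 1)
    return text[:i] + " " + text[i:]

def plan_edits(comments, poison_fraction=0.25):
    """Return (old, new) pairs: a random poison_fraction of comments
    get garbage, the rest get minor cover-noise edits. The caller
    would apply each pair after a random delay over days or weeks."""
    shuffled = random.sample(comments, len(comments))
    cutoff = int(len(shuffled) * poison_fraction)
    plan = []
    for idx, body in enumerate(shuffled):
        if idx < cutoff:
            plan.append((body, garbage_text()))   # real poison
        else:
            plan.append((body, minor_tweak(body)))  # cover noise
    return plan
```

Because the minor tweaks vastly outnumber the garbage edits, anyone diffing revisions sees mostly trivial changes, which is the masking effect described above.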
Now you're talkin'.
An intra-comment edit threshold would be fun to explore.