• 1 Post
  • 24 Comments
Joined 3 months ago
Cake day: August 7th, 2024


  • At roughly 35,000 words and filled with jargon and bureaucratic terms, the document is nearly impossible to read all the way through and just as hard to understand fully.

    A section devoted to passwords injects a large helping of badly needed common sense practices that challenge common policies. An example: The new rules bar the requirement that end users periodically change their passwords. This requirement came into being decades ago when password security was poorly understood, and it was common for people to choose common names, dictionary words, and other secrets that were easily guessed.

    Since then, most services require the use of stronger passwords made up of randomly generated characters or phrases. When passwords are chosen properly, the requirement to periodically change them, typically every one to three months, can actually diminish security because the added burden incentivizes weaker passwords that are easier for people to set and remember.

    A.k.a. use a password manager for most things, plus a couple of long, complex passwords for the things a password manager can’t cover (the password manager’s own master password, encrypted system partitions, etc.). I’m assuming I just summed up 35,000 words.
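
    That summary can be sketched in a few lines of Python using the standard-library secrets module. This is a minimal illustration, not part of the original comment: the tiny word list here is a placeholder standing in for a large word list (e.g. a diceware-style list), and the lengths are arbitrary examples.

    ```python
    import secrets
    import string

    def random_password(length: int = 20) -> str:
        """Random-character password, the kind a password manager stores for you."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def random_passphrase(words: list[str], count: int = 6) -> str:
        """Word-based passphrase: easier to memorize, for the few secrets a
        manager can't hold (its own master password, disk encryption)."""
        return "-".join(secrets.choice(words) for _ in range(count))

    # Placeholder word list for demonstration only; a real generator
    # would draw from thousands of words.
    demo_words = ["correct", "horse", "battery", "staple",
                  "orbit", "velvet", "quartz", "maple"]

    print(random_password())
    print(random_passphrase(demo_words))
    ```

    The point of secrets (rather than random) is that it uses the operating system’s cryptographically secure randomness, so the resulting secrets aren’t guessable the way human-chosen passwords are.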


  • Edit: I did a stupid. Anonym made PPA and that was part of the acquisition.

    The only thing I looked for but could not find an answer on one way or the other is if Mozilla is making any sort of profit from this system. I would guess no but actually have no idea.

    Fuck ad companies…

    Mozilla bought an ad company (Anonym) shortly after implementing PPA. Their goal appears to be to pivot their revenue plan to (in part) being an ad company.

    I’m absolutely convinced there’s a coordinated anti-Firefox astroturfing campaign going on lately.

    I cannot know for sure whether that’s true or not, but a lot of very bad decisions have happened at Mozilla over the last six months and I think they’ve been the straw that’s broken the camel’s back.

  • My theory about what happened next — which is supported by conversations I’ve had with researchers in artificial intelligence, some of whom worked on Bing — is that many of the stories about my experience with Sydney were scraped from the web and fed into other A.I. systems.

    These systems, then, learned to associate my name with the demise of a prominent chatbot. In other words, they saw me as a threat.

    LLMs predict text; they don’t have feelings or awareness. Even if a researcher did say that, I’d point to the Google engineer who decided an LLM had become sentient because it said so in the text it generated.

    Guys, my paper is sentient, it says so.

    If the AI says he’s dishonest and sensational, that’s because enough people on the internet have said so that the AI treats it as true.