• 13 Posts
  • 63 Comments
Joined 1 year ago
Cake day: July 12th, 2023






  • While corporate America focuses mainly on profits, “fighting for human rights” is just an empty slogan; corporate America already exploits human misery for profit. For the government, it’s “preventing China from becoming the dominant tech power in the developing world” that will drive this sort of initiative, which will most likely have mixed results or fail miserably altogether. Chinese exports are already driving the non-elite consumer markets of the developing world.


  • When I forgot part of my old password, I came up with a list of words I might have used and tried those. I eventually found it, even if I was panicky the whole time. If I were you, I would list the words and try them in order of probability.

    Un/Fortunately, BW rate-limits password brute-forcing. I feel you on the CAPTCHA hell; I hate their surreal sunflower CAPTCHA (maybe to make it as repulsive as possible to the hackers?).
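
    Not Bitwarden-specific, but here’s a minimal sketch of that candidate-list approach in Python, assuming you have some way to verify a guess offline (the word list, salt, and stored PBKDF2 hash below are all made up for illustration; a live login like BW’s will rate-limit you instead):

    ```python
    import hashlib
    from itertools import permutations

    # Hypothetical: words you think might be in the forgotten password,
    # listed roughly from most to least likely.
    candidate_words = ["sunflower", "Sunflower", "sunf1ower", "2023"]

    # Stand-in verifier: a salted PBKDF2 hash you can test guesses against
    # offline. In reality this would come from whatever artifact you hold
    # (e.g., an encrypted export), not from the service's login endpoint.
    salt = b"example-salt"
    stored_hash = hashlib.pbkdf2_hmac("sha256", b"Sunflower2023", salt, 100_000)

    def matches(guess: str) -> bool:
        return hashlib.pbkdf2_hmac("sha256", guess.encode(), salt, 100_000) == stored_hash

    # Try single words first, then two-word combinations, so the most
    # probable guesses are checked earliest.
    for n in (1, 2):
        for combo in permutations(candidate_words, n):
            guess = "".join(combo)
            if matches(guess):
                print("Found it:", guess)
                break
        else:
            continue  # no match at this length; try longer combinations
        break
    ```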



  • Yeah, this is definitely a problem with brand-new services, especially when the native app isn’t appealing. For example, I use Liftoff for Lemmy. Open source ✅ In the official App Store ✅ Relatively transparent about who the developer is ✅ No special permissions to start off ✅ Relatively few downloads 📛

    When a mobile app doesn’t ask for permissions, it’s definitely less nerve-racking than the more permissive desktop environments, where apps don’t have to be special to do considerable damage.






  • Shoppers of Dell Australia’s website who were buying a computer would see an offer for a Dell display with a lower price next to a higher price with a strikethrough line. That suggested to shoppers that the price they’d pay for the monitor if they added it to their cart now would be lower than the monitor’s usual cost. But it turns out the strikethrough prices weren’t the typical costs. Sometimes, the lower price was actually higher than what Dell Australia typically charged.

    Don’t believe the ads, folks. If prices are important to you, do your own research.


  • Whatever happens on the inside of a robotaxi is generally visible on the outside to bystanders and other motorists, The Standard notes of the AV’s “fishbowl-like” design.

    “While [autonomous vehicles] will likely be monitored to deter passengers having sex or using drugs in them, and to prevent violence, such surveillance may be rapidly overcome, disabled or removed,” the study said. “Private [autonomous vehicles] may also be put to commercial use, as it is just a small leap to imagine Amsterdam’s Red Light District ‘on the move.’”

    Convenient meetups, plus the additional benefits for certain fetishes.

    But don’t worry, folks, we’ll take this opportunity to put in even more surveillance tech to keep you safe while perfectly maintaining your privacy. 🤪






  • Those seem like questions for more research.

    I bet it’s more pernicious because it is so easy to incorporate AI suggestions. If you do your own research, you have to think a bit about whether the references/search results might be bad, and you still have to put the information in your own words so that you don’t offend the copyright gods. With the AI’s help, well, the spelling is good, the sentences are perfectly formed, the information is plausible, and it’s probably not a straightforward copy, so why not just accept it?


  • I am being brainwashed by AI!

    Here’s the paper: https://dl.acm.org/doi/10.1145/3544548.3581196

    Abstract

    If large language models like GPT-3 preferably produce a particular point of view, they may influence people’s opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write – and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants’ writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
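
    To make the setup concrete: steering an assistant like this can be as simple as a slanted system prompt. A minimal sketch, assuming an OpenAI-style chat API (the model name, prompt wording, and helper function are my placeholders, not the paper’s actual apparatus, which used GPT-3):

    ```python
    from openai import OpenAI  # assumes the `openai` Python package is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical stand-in for the study's treatment condition: the
    # assistant is configured to argue one side while helping the user write.
    OPINIONATED_SYSTEM_PROMPT = (
        "You are a writing assistant. When suggesting continuations of the "
        "user's post, consistently argue that social media is BAD for society."
    )

    def suggest_continuation(draft: str) -> str:
        """Return a suggested next sentence for the user's draft."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model, not the one from the study
            messages=[
                {"role": "system", "content": OPINIONATED_SYSTEM_PROMPT},
                {"role": "user", "content": f"Continue this post:\n{draft}"},
            ],
            max_tokens=60,
        )
        return response.choices[0].message.content

    print(suggest_continuation("Is social media good for society? I think"))
    ```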