cultural reviewer and dabbler in stylistic premonitions

  • 91 Posts
  • 200 Comments
Joined 2 years ago
Cake day: January 17th, 2022

  • Since he doesn’t mention it in his ‘fantastic’ reporting: OpenSSH 9.6, which patches this attack, was released on Monday.

    I am tempted to delete this post just for the article’s stupid clickbait headline, but it will probably still prompt some people to go update their OpenSSH installs, so… meh.

    Anyone who actually wants to know the details of the vulnerability should read the website about it, which is obviously much better than this article.

    *Also, since he doesn’t mention it, if on the Internet, the MITM would have to be installed at both end points (client side and server side) to be effective without the patch.*

    Huh? No. The attacker doesn’t need to be in two places, or even near either endpoint; they can be located at any fully on-path position between the client and server.
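
    For anyone who wants to confirm they’re updated, a quick sketch (the parsing of the `ssh -V` version string is an assumption; formats vary by build):

    ```shell
    # Compare the locally installed OpenSSH version against 9.6 (the release
    # containing the fix). Version-string parsing here is an assumption.
    installed="$(ssh -V 2>&1 | sed -n 's/^OpenSSH_\([0-9.]*\).*/\1/p')"
    required="9.6"
    # sort -V orders version strings numerically; if the installed version
    # sorts last, it is >= the required version.
    newest="$(printf '%s\n%s\n' "$installed" "$required" | sort -V | tail -n1)"
    if [ -n "$installed" ] && [ "$newest" = "$installed" ]; then
        echo "OpenSSH $installed: has the fix (>= $required)"
    else
        echo "OpenSSH $installed: update recommended"
    fi
    ```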



  • If your local censor is not effectively blocking Tor, then you can just use Tor Browser to access lemmy.ml’s normal address via an exit node. Onion services don’t particularly help with circumventing censorship that is performed by the ISP of the user.

    Onion services are useful for removing load from the exit nodes (since connections to them don’t need to go through exit nodes) and for having a self-authenticating address that doesn’t immediately reveal the location of the server. However, the location-hiding properties of onion services are not actually very strong (note that they used to be called hidden services and mostly aren’t called that anymore) and should not be relied upon. Many adversaries can locate a “hidden” service in a relatively short period of time. So, onion services are only potentially useful for resisting censorship at the server’s location, and then only in the short term and/or against weak adversaries.
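
    (For context on the server side: an onion service is configured with a couple of lines in torrc, and the directives still carry the old “hidden service” naming. The paths and ports below are illustrative.)

    ```
    # torrc excerpt: expose a local web server as an onion service
    HiddenServiceDir /var/lib/tor/my_service/
    HiddenServicePort 80 127.0.0.1:8080
    ```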


  • I don’t agree with all of your conclusions here, but I think it is important to note another problem:

    Mammoth’s AGPL 3.0 license is currently incompatible with Apple’s App Store, because Apple imposes restrictions which are explicitly forbidden by GPLv3 (specifically, the paragraphs in the license about “installation information”).

    So, while the source code is released under this license, the binaries that Mammoth distributes via Apple are not under a free software license at all. Recipients of the source code are allowed to distribute it (and their own modified versions) under GPLv3 only, which means not on Apple’s App Store (which is the only place most iOS users get software).

    This may be an oversight, or may be intentional. Other projects like Signal messenger have for years been using the GPLv3-iOS incompatibility to appear to be free software while actually maintaining a monopoly on the right to distribute binaries to iPhone users.

    See NextCloud’s COPYING.iOS for an example of how to release an iOS app under GPLv3 in a way that does not restrict that right to a single entity.












  • *it sounds like you’re formulating a conspiracy that implicates Signal themselves, claiming you believe they are being technically correct.*

    No, again, I think Signal employees sincerely believe that nobody is logging Signal metadata.

    *If I’m misreading your argument, please correct me. But there is a fine line between Just Asking Questions to promote a conspiracy theory, and just asking questions authentically, and it’s often hard to tell the difference.*

    There isn’t anything theoretical in what I’m saying, except for the implication that Signal’s financial backing might be related to its surveillance-friendly architecture.

    You can use words like “conspiracy” to dismiss the point, but tell me: if you’re completely confident that the adversaries you want to protect against are unable to compromise the server infrastructure, why would you need e2e encryption at all?

    *Because I’m not 100% confident, like most people under a broad range of reasonable threat models.*

    Good answer. So, when analyzing the security properties of a thing that purports to protect against a compromised server, shouldn’t we logically consider the case where the server is compromised? And how does Sealed Sender fare in that case? Do you not see how it is performative cryptography?

    *Precisely. I think the design is good, but it’s a single entity controlling basically all the servers, which means that not only can they effectively be considered a single server, but using your argument they can effectively be assumed to be collecting the exact same metadata.*

    Why do you think the default configured servers are “basically all the servers”? The way SimpleX works, if you’re using one of the default servers and I am not, and we add each other as contacts, you probably wouldn’t even notice; we’d each be sending and receiving through servers operated by different entities. But again, even if we are both using the same default server, this is not “the exact same metadata” as Signal, because there are no phone numbers involved.


  • Did you read my other comment which is linked to from the one you’re replying to?

    The parts of this reply that are in italics are direct quotes from it.

    *First, we have to assume a worst case scenario, where Signal not only logs all IP addresses (despite what multiple court cases have shown us), but that they do it both secretly and intentionally in order to store that data. Your theory already requires serious collusion between that company and the government, with no whistleblowers.*

    No, you don’t need to assume that Signal does anything. As I said, Signal says that they don’t retain any of this metadata, and I think it is likely that Signal employees are sincere when they say that. But someone with the right access at Signal’s ISP (Amazon) and anybody who can coerce, compel, or otherwise compromise those people (or their computers) can log it without Signal’s cooperation or knowledge.

    *And if that was the case, they wouldn’t want Sealed Sender actually functioning. So we also have to buy into an additional conspiracy that they added it as a red herring. What does your theory say about this: did they know they could work around it, or is it secretly flawed?*

    I think sealed sender does what it says it does, which is let you send messages without explicitly telling the server who the message is from. But that doesn’t change the fact that you’re connecting to their servers from the same IP address to send and receive and you need to identify yourself (with your phone number) to receive, so, the identity of the sender can be easily inferred if the server (or its operator) wants to correlate the information available to it.

    Sealed sender only makes sense if the server is honest and doesn’t link the ‘anonymous’ sender with the non-anonymous receiver activities coming from the same IP address. But, if the server is honest, then a “no logging” policy would accomplish the same thing. Sealed sender is performative cryptography.
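
    A toy sketch of the correlation being described (all data, field names, and the helper function here are invented for illustration): if the same IP address both sends “anonymously” and authenticates to receive, a server-side observer can join the two logs.

    ```python
    # Toy model of the metadata-correlation argument. Everything here is
    # synthetic; real server logs would look nothing like this.
    sealed_sends = [                      # sender identity omitted (sealed sender)
        {"ip": "203.0.113.5", "t": 100},
        {"ip": "198.51.100.9", "t": 105},
    ]
    auth_receives = [                     # receiving requires authenticating
        {"phone": "+15551234567", "ip": "203.0.113.5", "t": 98},
        {"phone": "+15559876543", "ip": "198.51.100.9", "t": 104},
    ]

    def infer_senders(sends, receives, window=60):
        """Attribute each 'anonymous' send to a phone number that
        authenticated from the same IP within `window` seconds."""
        inferred = {}
        for s in sends:
            for r in receives:
                if r["ip"] == s["ip"] and abs(r["t"] - s["t"]) <= window:
                    inferred[(s["ip"], s["t"])] = r["phone"]
        return inferred

    print(infer_senders(sealed_sends, auth_receives))
    ```

    Sealed sender removes the explicit sender field, but under these assumptions the IP-and-time join recovers it anyway.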

    *You can use words like “conspiracy” to dismiss the point, but tell me: if you’re completely confident that the adversaries you want to protect against are unable to compromise the server infrastructure, why would you need e2e encryption at all?*

    *How about the ease of which somebody could use Signal with a VPN? That defeats half of your metadata complaints.*

    A VPN hides your actual IP address from the server, but that is not the kind of metadata I’m talking about. I’m talking about who (which phone numbers, since that is Signal’s identifier) is talking to who, and when. A VPN only helps with this problem when there are other Signal users coming from the same VPN IP address at the same time as you, and then it only helps a little. It could help if you used a VPN for sending but not receiving, or vice-versa, or used different VPNs for each, but, Signal doesn’t do that (and if they did they’d probably run the ‘different’ VPNs themselves on cloud services anyway).

    *But if you were being fair, you would have to level the same accusation against every other messaging app, and the only ones I can think of have worse encryption (Session) or explicitly have servers under unilateral control (SimpleX) or fare far worse (Matrix, Threema, Wire, etc).*

    It’s ironic that the five things you picked actually all have the same major advantage over Signal (and WhatsApp, and Telegram): those five actually all are usable without a phone number! They each have their own problems, but at least it’s possible to use them all without a phone number!

    What do you mean about SimpleX having servers under unilateral control? The software comes with several of the author’s servers baked in which you use by default, but I think it is easy to use a different one or to run your own. And a cool thing about SimpleX is that each direction of a conversation is on a different server, so within a single conversation you are often not sending and receiving from the same server, which is the opposite of the metadata centralization of Signal’s design. (Of course, when all of the servers involved are run by a single entity, which I think is probably the case for most SimpleX users today, that entity can still observe who is talking to who. But the protocol is explicitly designed to decentralize metadata instead of to centralize it. And it doesn’t use phone numbers, much less require them.)