• 28 Posts
  • 66 Comments
Joined 1 year ago
Cake day: June 13th, 2023



  • fiasco@possumpat.io to Selfhosted@lemmy.world · How much swap?

    I think it’s better to think about what swap is, and the right answer might well be zero. When the kernel needs memory and there isn’t any free, existing pages are transferred to the swap file/partition. This is incredibly slow. If there isn’t enough memory or swap available, then at least one process (one hopes the one that made the unfulfillable request for memory) is killed by the OOM killer.

    If you ever do start swapping memory to disk, your computer will grind to a halt.

    Maybe someone will disagree with me, and if someone does I’m curious why, but unless you’re in some sort of very high memory utilization situation, processes being killed is probably easier to deal with than the huge delays caused by swapping.

    Edit: Didn’t notice what community this was. Since it’s a webserver, the answer requires some understanding of utilization. You might want to look into swap files rather than swap partitions, since I’m pretty sure they’re easier to resize as conditions change.
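    To put numbers to the utilization question, one approach (a minimal sketch, not a full monitoring setup) is to parse /proc/meminfo and watch how much swap is actually in use. Here the file contents are stubbed with a sample string so the snippet is self-contained; on a real Linux box you’d read the file instead:

```python
# Sketch: parsing /proc/meminfo-style output to see how much swap is in use.
# SAMPLE is a made-up stand-in for the real file.
SAMPLE = """\
MemTotal:       16384000 kB
MemFree:         2048000 kB
SwapTotal:       4194304 kB
SwapFree:        4194304 kB
"""

def meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    out = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        out[key] = int(rest.split()[0])
    return out

info = meminfo(SAMPLE)
swap_used_kb = info["SwapTotal"] - info["SwapFree"]
print(swap_used_kb)  # 0 -> nothing swapped out yet
```

    If that number stays at or near zero under normal load, that’s evidence your swap is mostly insurance; if it climbs steadily, you’re in the grind-to-a-halt territory described above.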



  • Actually, I’d like to expand on this a bit. Setting aside the question of whether storytelling itself is necessary, though I believe it is, I think part of why so much modern writing is so soulless is the focus on getting from point A to point B. “Story beats,” they call them. Or we might call this the Pixar Algorithm.

    The software tooling around computer graphics is such that any major studio will produce stunning visuals. Whether they nail visual design or cinematography is still a question, but the fidelity of the graphics will be great. Do something tried and tested, and you’ll get a Marvel movie.

    Writing is something else, though, because writing well requires having something to say. It seems like nobody in Hollywood has anything to say anymore, so they try and paper over that fact with “cleverness.” But they aren’t very clever either.

    This is a roundabout way of saying, I think the “unnecessary” stuff, the stuff that doesn’t drive the story to the next beat, is where most of the soul of a story resides. The reason it’s so important to have something to say is that it gives you some direction on how to add relevance to the unnecessary parts. So all this stuff is tied pretty tightly together.

    This is also why my commentary on “Tomorrow Is Yesterday” was mostly talking about other, better episodes.



  • Well… They are of course right that these sorts of decentralized systems don’t have a lot of privacy. It’s necessary to make almost everything available to almost everyone in order to keep the system synchronized.

    So stuff like Meta being able to profile you based on statistical demographic analysis basically can’t be stopped.

    It seems to me, the dangers are more like…

    Meta will do the usual rage-baiting on its own servers, which means their upvotes will reflect that, and those posts will be pushed to federated instances. This will almost certainly pollute the system with tons of stupid bullshit, and will basically necessitate defederating.

    It’ll bring in a ton of, pardon the word, normies. Facebook became unsavory when your racist uncle started posting terrible memes, and his memes will be pushed to your Mastodon feed. This will basically necessitate defederating.

    Your posts will be pushed to Meta servers, which means your racist uncle will start commenting on them. This will basically necessitate defederating.

    Then, yes, there’s the danger of EEE (embrace, extend, extinguish). Hopefully the Mastodon developers will resist that. On the plus side, if Meta does try to invade Lemmy, I’m pretty confident the Lemmy developers won’t give them the time of day.




  • It goes along with how they’ve stopped calling it a user interface and started calling it a user experience. Interface implies the computer is a tool that you use to do things, while experience implies that the things you can do are ready-made according to, basically, usage scripts that were mapped out by designers and programmers.

    No sane person would talk about a user’s experience with a socket wrench, and that’s how you know socket wrenches are still useful.



  • My personal feeling is that first contact retcons are signs of lazy writing. I feel the same way about the NX-01 being boarded by Ferengi. Just come up with your own aliens; that’s part and parcel of Trek.

    I’ve only seen two episodes of Strange New Worlds, basically in isolation. I saw one episode shitting on “Arena,” and one episode shitting on “Balance of Terror.”

    By this I mean, “Arena” is about understanding that aliens get to have territorial sovereignty too, and that the Gorn weren’t exactly wrong, even as they weren’t exactly right. Spock mentions that right and wrong will have to be sorted out by diplomats. Not exactly great news for the dead, but what can you do?

    Meanwhile, the anti-Arena episode I happened to see of Strange New Worlds, everyone was champing at the bit to disintegrate some lizards.

    I’m not even opposed to doing an anti-Arena episode, I mean, that’s “Siege of AR-558.” But if you’re gonna do that, you have to acknowledge what a tragedy that is.


  • I suppose I want to remark on why I’m contrasting the Metrons, these strange Greek god creatures, with Daniels, the time cop from the 31st century.

    The Metrons condemn both the humans and the Gorn for their barbarism. But what do you do in the face of invaders? Or more broadly, what do you do under threat of violence? Do you meet it with more violence? Do you lay down and die?

    The Metrons appear to have a level of technological superiority that makes these questions irrelevant. And just… how precious.

    “Just become gods” doesn’t answer the question. Starfleet and the Federation put a lot of work into making peace, and a lot of work into making war. As it goes in the real world. Hence, I can’t take the Metrons any more seriously than I can take Daniels, and I don’t take Daniels seriously at all.





  • I suppose I disagree with the formulation of the argument. The Entscheidungsproblem and the halting problem are limitations on formal analysis. It isn’t relevant to talk about either of them in terms of “solving them”; that’s why we use the term undecidable. The halting problem asks, in modern terms—

    Given a computer program and a set of inputs to it, can you write a second computer program that decides whether the input program halts (i.e., finishes running)?

    The answer to that question is no. In limited terms, this tells you something fundamental about the capabilities of Turing machines and lambda calculus; in general terms, this tells you something deeply important about formal analysis. This all started with the question—

    Can you create a formal process for deciding whether a proposition, given an axiomatic system in first-order logic, is always true?

    The answer to this question is also no. Turing machines were devised as a means of specifying a formal process for solving logic problems, so the undecidability of the Entscheidungsproblem was proven through the undecidability of the halting problem. This is why there are still open logic problems despite the invention of digital computers, and despite how many flops a modern supercomputer can pull off.
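    The diagonal argument behind that “no” fits in a few lines. Here is a sketch in Python, where halts is a hypothetical decider supplied by whoever claims to have cracked the problem:

```python
def make_paradox(halts):
    """Given a claimed decider halts(p) -> bool, build the program
    that defeats it (Turing's diagonal argument)."""
    def paradox():
        if halts(paradox):   # decider says "paradox halts"...
            while True:      # ...so loop forever, making it wrong
                pass
        return               # decider says "never halts" -> halt, also wrong
    return paradox

# Any fixed answer a "decider" gives about its own paradox program is wrong.
# A decider that answers "never halts" is refuted by running the program:
claims_no = make_paradox(lambda p: False)
claims_no()  # returns immediately, contradicting the "never halts" claim
```

    Whatever the decider answers about the program built from it, the program does the opposite, so no such decider can exist.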

    We don’t use formal process for most of the things we do. And when we do try to use formal process for ourselves, it turns into a nightmare called civil and criminal law. The inadequacies of those formal processes are why we have a massive judicial system, and why the whole thing has devolved into a circus. Importantly, the inherent informality of law in practice is why we have so many lawyers, and why they can get away with charging so much.

    As for whether it’s necessary to be able to write a computer program that can effectively analyze computer programs in order to write a computer program that can effectively write computer programs, consider: even the loosey-goosey horseshit called “deep learning” is based on error functions. If you can’t compute how far away you are from your target, then you’ve got nothing.
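    As a concrete illustration of the error-function point (a toy sketch, not real deep learning): gradient descent fitting y = w*x works only because we can compute, at every step, how far the current guess is from the target.

```python
# Fit y = w*x to data by gradient descent on squared error.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # generated with w = 2

w = 0.0
lr = 0.05
for _ in range(200):
    # d/dw of sum((w*x - y)^2) is sum(2*(w*x - y)*x): the error gradient
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad

print(round(w, 3))  # 2.0
```

    Take away the error function and the loop has nothing to descend; that’s the whole game.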


  • This is proof of one thing: that our brains are nothing like digital computers as laid out by Turing and Church.

    What I mean about compilers is, compiler optimizations are only valid if a particular bit of code rewriting does exactly the same thing under all conditions as what the human wrote. This is generally only possible if the code in question doesn’t include any branches (ifs, loops, function calls). A section of code with no branches is called a basic block. Rust is special because it harshly constrains the kinds of programs you can write: another consequence of the halting problem is that, in general, you can’t track pointer aliasing outside a basic block, but Rust’s constraints do make this possible. It just foists the intellectual load onto the programmer. This is also why Rust is far and away my favorite language; I respect the boldness of the play, and the benefits far outweigh the drawbacks.
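    The aliasing problem can be illustrated even in Python, as a toy sketch with lists standing in for pointers:

```python
def add_twice(src, dst):
    # A compiler may NOT rewrite this as dst[0] += 2 * src[0],
    # because src and dst might alias (refer to the same list).
    dst[0] += src[0]
    dst[0] += src[0]

a = [1]
add_twice(a, a)  # aliased: 1 -> 2 -> 4
print(a[0])      # 4, but the "optimized" rewrite would give 3
```

    Rust’s rule that mutable references can’t alias is exactly what licenses that rewrite; the borrow checker makes you prove src and dst are distinct before the code compiles.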

    To me, general AI means a computer program having at least the same capabilities as a human. You can go further down this rabbit hole and read about the question that spawned the halting problem, the Entscheidungsproblem (decision problem), to see that AI is actually more impossible than I let on.