• 53 Posts
  • 190 Comments
Joined 11 months ago
Cake day: July 29th, 2023


  • Perhaps I’m being dense and the coffee hasn’t kicked in yet, but I fail to see where this new computing paradigm mentioned in the title actually is.

    From their inception, computers have been used to plug in sensors, collect their values, and use them to compute stuff and things. For decades, each and every consumer-grade laptop has had adaptive active cooling, which means spinning up fans and throttling down the CPU when sensors report values over a threshold. One of the most basic aspects of programming is checking whether a memory allocation was successful and otherwise handling the out-of-memory scenario. Updating app state when network connections go up or down is also a very basic feature. Concepts like retries, jitter, and exponential backoff have become basic features provided by dedicated modules (a sketch of that pattern closes this comment). From the start, Docker provided support for health checks, which are basically endpoints designed to be probed periodically. There are also canary tests to check whether services are reachable and usable.

    These things have existed for decades. This stuff has been done in production software since the 90s.

    Where’s the novelty?
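
    To illustrate how old-hat this is, here is the kind of retry-with-backoff helper those dedicated modules package up. This is a minimal sketch, not any particular library’s API; the function name and defaults are made up.

        import random
        import time

        def retry_with_backoff(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
            """Retry `operation` with exponential backoff and full jitter."""
            for attempt in range(max_attempts):
                try:
                    return operation()
                except Exception:
                    if attempt == max_attempts - 1:
                        raise
                    # Exponential growth capped at max_delay, randomized ("full jitter")
                    # so a fleet of clients doesn't retry in lock-step.
                    delay = min(max_delay, base_delay * (2 ** attempt))
                    time.sleep(random.uniform(0, delay))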


  • Having said this, I’d say that OFFSET+LIMIT should never be used, not because of performance concerns, but because it is fundamentally broken.

    If rows are being inserted frequently into a table and you page through it with OFFSET+LIMIT, the output of each pagination request will not correspond to the table’s contents. For each row that is appended to the table, your next request will include a repeated element from the tail of the previous page.

    Things get even messier once you try to page back your history, as now both the tip and the tail of each page will be messed up.

    Cursor-based pagination ensures these conflicts do not happen, and it also has the nice trait of being easily cacheable (rough sketch below).
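
    As a rough sketch of the difference, assuming a sqlite3 table with an auto-incrementing id (table and column names are made up):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
        conn.executemany("INSERT INTO posts (body) VALUES (?)",
                         [(f"post {i}",) for i in range(100)])
        PAGE_SIZE = 10

        # OFFSET+LIMIT: any row inserted between two requests shifts every later page,
        # so the tail of page N reappears at the head of page N+1.
        def page_by_offset(page):
            return conn.execute(
                "SELECT id, body FROM posts ORDER BY id DESC LIMIT ? OFFSET ?",
                (PAGE_SIZE, page * PAGE_SIZE),
            ).fetchall()

        # Cursor/keyset pagination: each page is anchored to the last id already seen,
        # so newly inserted rows never leak duplicates into the next page.
        def page_by_cursor(last_seen_id=None):
            if last_seen_id is None:
                return conn.execute(
                    "SELECT id, body FROM posts ORDER BY id DESC LIMIT ?",
                    (PAGE_SIZE,),
                ).fetchall()
            return conn.execute(
                "SELECT id, body FROM posts WHERE id < ? ORDER BY id DESC LIMIT ?",
                (last_seen_id, PAGE_SIZE),
            ).fetchall()

    The cacheability follows from the same property: a given (cursor, page size) pair always maps to the same result set as long as the rows behind the cursor are immutable.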





  • It’s usually easier imo to separate them into different processes (…)

    I don’t think your comment applies to the discussion. One of the thread pools mentioned is for IO-bound applications, which means things like sending HTTP requests.

    Even if somehow you think it’s a good idea to move this class of tasks to a separate process, you will still have a very specific thread pool that can easily overcommit because most tasks end up idling while waiting for data to arrive.

    The main take is that there are at least two classes of background tasks with very distinct requirements and usage patterns. It’s important to handle them in separate thread pools that behave differently (see the sketch below). Some frameworks already do that for you out of the box. Nevertheless, it’s important to be mindful of how distinct their usage is.
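
    A minimal sketch of that separation with Python’s concurrent.futures; the pool sizes, URL, and task bodies are illustrative only:

        import os
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        # IO-bound pool: tasks mostly sit idle waiting on sockets, so it can be
        # overcommitted far beyond the core count.
        io_pool = ThreadPoolExecutor(max_workers=64, thread_name_prefix="io")

        # CPU-bound pool: roughly one worker per core; more only adds contention.
        cpu_pool = ThreadPoolExecutor(max_workers=os.cpu_count() or 4,
                                      thread_name_prefix="cpu")

        def fetch(url):
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()

        def crunch(blob):
            return len(blob)  # stand-in for real CPU-heavy work

        raw = io_pool.submit(fetch, "https://example.com").result()
        result = cpu_pool.submit(crunch, raw).result()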


  • I’d love to see benchmarks testing the two, and out of curiosity also including compressed JSON docs to take into account the impact of payload volume.

    Nevertheless, I think there are two major features that differentiate protobuf and fleece:

    • fleece is implemented as an appendable data structure, which might open the door to some use cases,
    • protobuf supports more data types than the ones supported by JSON, which may be a good or bad thing depending on the perspective.

    In the end, if the world survived with XML for so long, I’d guess we can live with minor gains just as easily.
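
    On the benchmark wish: the compressed-JSON baseline at least is trivial to measure with the standard library, while the protobuf and fleece numbers would need their respective codecs and schemas. The sample document here is made up.

        import gzip
        import json

        doc = {"id": 42, "name": "example", "tags": ["a", "b", "c"],
               "scores": list(range(100))}

        raw = json.dumps(doc, separators=(",", ":")).encode()
        compressed = gzip.compress(raw)

        print(f"plain JSON:   {len(raw)} bytes")
        print(f"gzipped JSON: {len(compressed)} bytes")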


  • We have a client which is MAD cause the project is riddled with bugs, but the solution somehow is paying more attention. Except that it clearly isn’t feasible to pay more attention when you have to check, recheck and check again the same thing over and over…

    By definition, automated testing means paying more attention, and doing it so well that the process is automated.

    They say it’s a waste cause you can’t catch UI (…)

    Show them a working test that catches UI bugs (a minimal example closes this comment). It’s hard to argue against facts.

    but they somehow think they are smarter than google or any other small or big company that do write test

    Don’t sell a solution because others are doing it. Sell a solution because it’s a solution to the problem they are experiencing, and it’s in their best interests to solve it. Appeals to authority don’t work on everyone.
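
    On the UI point: something as small as this, assuming Playwright is installed, usually settles the argument. The URL and selector are made up.

        from playwright.sync_api import sync_playwright

        def test_login_button_is_visible():
            with sync_playwright() as p:
                browser = p.chromium.launch()
                page = browser.new_page()
                page.goto("https://example.com/login")  # made-up URL
                # Fails the suite if the button regressed to hidden or missing.
                assert page.is_visible("button#login")
                browser.close()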




  • Eduards Sizovs, the DevTernity organizer accused of making up fake female speakers, felt it was the right PR move to post this message on Twitter:

    https://twitter.com/eduardsi/status/1728447955122921745

    So I’ve been called out (and canceled?) by listing a person on my conference’s website (who never actually made it to the final program). JUST A RANDOM PERSON ON THE CONFERENCE WEBSITE canceled all the good work I’ve been doing for 15+ years. All focus on that.

    I said it was a mistake, a bug that turned out to be a feature. I even fixed that on my website! We’re cool? Nooooo, we want blood! Let’s cancel this SINNER!

    The amount of hate and lynching I keep receiving is as if I would have scammed or killed someone. But I won’t defend myself because I don’t feel guilty. I did nothing terrible that I need to apologize for. The conference has always delivered on its promise. It’s an awesome, inclusive, event. And yes, I like Uncle Bob’s talks. They’re damn good.

    When the mob comes for you, you’re alone. So, let it be. I’ll keep doing a great conference. With all speakers, half the speakers, or I’ll be speaking alone on all tracks and lose my voice. But the event will be a blast. Like always. I’ll die while doing great work. But the mob won’t kill me.

    I don’t think that tone-deaf is the right word for this.


  • From the article:

    “To spell it out why this conference generated fake women speakers,” Orosz alleges, it was “because the organizer wants big names and it probably seemed like an easy way to address their diversity concerns. Incredibly lazy.”

    How hard is it for these organizers to actually reach out to women developers and extend an invite to talk about any topic they’re interested in? At the very least, there are tons of high-profile bloggers who are vocal about things and stuff. Even though women are severely outnumbered in the field, you almost have to go out of your way to avoid extending an invite to one.





  • They used it because it was an established term

    My graph theory is a bit fuzzy, but I think that in a directed graph a branch corresponds to a path between two nodes/vertices. That means that, by definition, any path from the root node to any vertex is itself a branch.

    I don’t think Git invented this concept, nor did any other version control system.

    I know that “branch” helps intuitively and visually when it’s actually an offshoot with one root and a dangling tip, like an actual tree branch…

    I think that your personal definition of a branch doesn’t correspond to what graph theory calls a branch. Anyone please correct me if I’m wrong.
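
    For reference, the textbook definition I have in mind (terminology does vary between texts):

        In a directed graph $G = (V, E)$, a path from $u$ to $v$ is a sequence of
        vertices $u = v_0, v_1, \dots, v_k = v$ with $(v_{i-1}, v_i) \in E$ for all
        $1 \le i \le k$; in a rooted tree, any such path starting at the root picks
        out one "branch".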


  • If a library or framework requires boilerplate code it’s a bad library or a bad framework.

    I think this take is uneducated and can only come from a place of inexperience. There are plenty of use cases that naturally lead to boilerplate code, such as initialization/termination, setting up/tearing down, configuration, etc. (a trivial example below). This isn’t a code smell; it’s just the natural consequence of having to integrate third-party code into your projects.
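
    A trivial example of the kind of setup boilerplate I mean, using nothing but the Python standard library (the flag and format string are arbitrary):

        import argparse
        import logging

        # Configuration/initialization boilerplate: repeated in almost every CLI,
        # yet nobody calls argparse or logging bad libraries because of it.
        parser = argparse.ArgumentParser(description="example tool")
        parser.add_argument("--verbose", action="store_true", help="enable debug logging")
        args = parser.parse_args()

        logging.basicConfig(
            level=logging.DEBUG if args.verbose else logging.INFO,
            format="%(asctime)s %(levelname)s %(message)s",
        )
        logging.getLogger(__name__).info("initialized")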


  • From the article:

    By library, I mean any software that can be run by the user: shared objects, modules, servers, command line utilities, and others. By service, I mean any software which the user can’t run on their own; anything which depends (usually through an API) on a service provider for its functionality.

    It looks like the blogger took a page out of Humpty Dumpty’s playbook and repurposed familiar keywords that refer to widely established concepts, assigning them entirely different meanings used by no one except the author. I’d go so far as to say these redefinitions make no sense at all.

    Perhaps the blogger even has a point to make, but stumbling over these semantic screwups is a major turn-off, and in my case it led me to stop reading the blog post on the spot.