• 1 Post
  • 42 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • I avoid any company that requires a software test before the interview.

    I worked for a company that introduced them after I joined. I collected evidence that none of the company's top performers would have joined: we all had multiple offers, and having to do the test would have put people off applying. The scores from it didn't correlate with interview results, so everyone ignored them. It still took 2 years to get rid of the test.

    The best place I worked used STAR (Situation, Task, Action, Result) based interviews. The goal was to keep asking questions until you got two complete STARs.

    I thought these were great because they were more varied and conversational, but there was still comparable consistency across interviewers.

    You would inevitably get references to past work, and you would switch to asking a few questions about that. Since it was anchored in a situation, you would get more complete technical explanations (e.g. "on that project I wrote an X, and Y was really challenging because of Z").

    I loved asking "Tell me about something you're really proud of". Even a nervous junior would start opening up after that question.

    After an hour-long interview you would end up with enough information to compare the candidate against the company gradings (junior, senior, etc…).

    This was important because it changed the attitude of the interview. It wasn't a case of whether the candidate would be a good senior dev for project X, but an assessment of the candidate in their own right. If they came out as a lead and we had a lead role, let's offer them that.


  • If you sign up to social media it will pester you for your email contacts, location and hobbies/interests.

    Building a signup wizard that uses that information to select an instance would seem to be the best approach.

    The contacts would let you know which instance most of your friends are on (e.g. by looking up their email addresses).

    A hobby/interests selection section can point people at topic-specific instances.

    Lastly, the location would let you choose a country-specific general instance.

    It would help push decentralisation, but instead of presenting a raw choice of instances you're asking questions the user is used to being asked. A rough sketch of the selection logic is below.
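
    As a sketch of the wizard's final step (every name here, e.g. KNOWN_INSTANCES and pick_instance, is invented for illustration; a real wizard would query a live directory of instances):

        # Hypothetical instance picker: friends first, then topics, then country.
        from collections import Counter

        # Toy directory mapping instance domains to their focus and country.
        KNOWN_INSTANCES = {
            "lemmy.world": {"topics": {"general"}, "country": None},
            "feddit.de": {"topics": {"general"}, "country": "DE"},
            "programming.dev": {"topics": {"programming"}, "country": None},
        }

        def pick_instance(contact_domains, interests, country):
            # 1. Prefer the instance most of the user's contacts are on.
            friends = Counter(d for d in contact_domains if d in KNOWN_INSTANCES)
            if friends:
                return friends.most_common(1)[0][0]
            # 2. Otherwise match a topic-specific instance to their interests.
            for domain, info in KNOWN_INSTANCES.items():
                if info["topics"] & set(interests):
                    return domain
            # 3. Fall back to a general instance in their country.
            for domain, info in KNOWN_INSTANCES.items():
                if info["country"] == country:
                    return domain
            return "lemmy.world"  # last-resort default

        print(pick_instance([], ["programming"], "DE"))  # -> programming.dev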



  • Nvidia drivers don't tend to be as performant under Linux.

    With AMD, instead of using the AMDVLK driver you would use RADV (developed largely by Valve), which performs better.

    Every AMD card under Linux supports OpenCL (the driver is tied to the GPU architecture rather than the individual card) and you can install it very easily. Googling the same thing for Windows found pages of errors and missing support.

    Blender supports OpenCL. I bet the 2x improvement is Blender being able to offload rendering to the AMD graphics card.
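
    As an aside, a quick way to sanity-check that the OpenCL stack actually sees the GPU (assuming the pyopencl package is installed):

        # List every OpenCL platform/device the driver exposes; an AMD GPU
        # should appear here if the OpenCL runtime is installed correctly.
        import pyopencl as cl

        for platform in cl.get_platforms():
            for device in platform.get_devices():
                print(f"{platform.name}: {device.name}")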

    Also, this represents the biggest headache in Linux: lots of gamers insist they can only use Nvidia cards, yet Nvidia treats Linux as an afterthought at best, or deliberately sabotages things at worst.

    AMD embraced open source, and so Linux land is much nicer on AMD (and to a lesser extent Intel).

    The results here are probably a DXVK quirk: lots of "Nvidia optimised" games have engines doing weird things which the Nvidia driver compensates for, and DXVK has been identifying those to produce "good" Vulkan calls.


  • Mint was a reaction to Gnome 3: its unique workflow upset a lot of people, and the people behind Mint decided to build the Cinnamon desktop (it's Gnome 3 made to look/work like Gnome 2). They needed a distribution to build/test their work, so they based one on Ubuntu and called it Mint.

    As a bit of explanation, there are only a few projects which attempt to build an entire Linux distribution from scratch. This involves finding code from thousands of sources, working out packaging, etc… We call these 'base' distributions: Debian is the base distribution for Ubuntu, and Ubuntu is the base distribution for Mint.

    Ubuntu tends to be slightly ahead of Debian in the software versions it uses and automatically enables the 'non-free' repositories. Ubuntu also tends to push some Canonical-specific things like Snaps (which everyone hates).

    I believe Mint strips the Canonical-specific things out of Ubuntu, and you get the latest version of Cinnamon.

    It's all a bit…


  • If it's for work I would suggest picking a "stable" distribution like Debian, Kubuntu or openSUSE.

    A lot of people recommend Arch or Fedora, but the focus of those is getting the very latest releases, which increases your chance of stuff breaking.

    A lot of people will suggest niche distributions. Those can be great for specific needs, but you will generally always find Debian/Ubuntu/RHEL support for commercial apps.

    I would also suggest looking at the KDE desktop. Many distributions default to Gnome, but Gnome is unique in how it works; KDE (or XFCE) will provide a desktop similar to Windows 11.

    Lastly I would suggest looking at Crossover Linux by Codeweavers.

    Linux has something called WINE; it's an attempt to implement the Windows 95-11 APIs so Windows applications can run on Linux.

    WINE is how the Steam Deck (and Linux in general) is able to play Windows games. Valve embedded it into Steam and called it "Proton".

    WINE is primarily developed by Codeweavers and they provide the Crossover application that makes setting up and running a Windows application really easy.

    People will mention Lutris, but that has a far steeper learning curve.

    There is an application database so you can see in advance if your applications would work: https://appdb.winehq.org/



  • Python's public API changes subtly between releases, so minor changes in Python version can lead to massive changes in the versions of the dependencies you use.

    A few years ago we developed a script to update Cassandra, written against Python 2.7.Y. The production environment used Python 2.7.X (five patch releases earlier).

    That completely changed the Cassandra library version. We had to go back 15 patch releases, which annoyingly introduced a breaking change in the Cassandra library's API and wouldn't work on the dev environment's Python.

    Python 3 hasn't solved this. Two years ago I was asked to look at a number of machine learning projects running in Docker; upgrading Python from 3.4 to 3.8 had a huge effect on dependencies, and figuring out the right combination was a huge pain.

    This is a solved problem in Java. Node.js has the same weakness, but its changes to the language spec are additive, so old code runs on new releases (just not the inverse). Ruby has exactly the same issues as Python.
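
    One mitigation is pinning exact versions and failing fast when the runtime drifts. A minimal sketch; the pins below are illustrative, not the versions from the projects above:

        # Startup guard that fails fast when the interpreter or installed
        # dependencies differ from what the code was developed against.
        import sys
        from importlib.metadata import version  # stdlib since Python 3.8

        EXPECTED_PYTHON = (3, 8)
        PINNED = {"cassandra-driver": "3.25.0"}  # exact pins, as in a lockfile

        def check_environment():
            if sys.version_info[:2] != EXPECTED_PYTHON:
                raise RuntimeError(
                    f"built for Python {EXPECTED_PYTHON}, "
                    f"running {sys.version_info[:2]}")
            for name, pinned in PINNED.items():
                installed = version(name)
                if installed != pinned:
                    raise RuntimeError(f"{name}: pinned {pinned}, got {installed}")

        check_environment()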



  • SpaceX are launching 26-52 satellites at a time and have sustained 3 launches a week for most of the year.

    The satellites are in low Earth orbit; without constant thrust, atmospheric drag will force them to re-enter Earth's atmosphere within a few months. This means they aren't adding to the junk in space.

    Unlike NASA, ULA, Arianespace, Roscosmos, etc… SpaceX have always performed second-stage deorbit burns, so they aren't adding to space junk by launching either.

    The low Earth orbit is to ensure low latency, and the need for constant thrust means the satellites have a short life expectancy by design. That is why SpaceX fought to keep the satellites as cheap as possible (e.g. $250k).

    First-stage booster reuse and fairing reuse mean the majority of the launch cost is the second stage ($15 million).

    The whole lot is privately funded.



  • I have always had one question.

    In Voyager we see the Borg have thousands of ships of varying sizes and control a vast area of space. Voyager alone is able to take down spheres and small cubes.

    Yet at Wolf 359 a single cube attacks and destroys dozens of Starfleet vessels. If a single cube is able to have that level of effect, why didn't the Borg commit a larger fleet?

    You have the same issue in First Contact: they only commit one cube.

    Considering how difficult the Federation finds holding them back, attacking with 3-6 cubes would seem to assure victory.


  • The issue is end-to-end encryption.

    The law change requires messaging applications to be able to hand over messages sent between people using their service.

    In the 2000s, a messaging application would have a secure connection between itself and Person A, and another secure connection between itself and Person B.

    Person A would encrypt the message and send it to the service, which would decrypt it, open a connection to Person B, re-encrypt the message and send it on to Person B.

    So if the police got a warrant for the communications of Person B (say they suspected the person of involvement in human trafficking), the messaging service could provide all messages sent to Person B.

    Messaging services have since taken themselves out of the loop: Person A now encrypts the message and sends it directly to Person B. So the police appear with a warrant, and the messaging service shrugs its shoulders, since it has no means to get at the data.

    The law effectively requires messaging services to design their apps/services so they can comply with a warrant.

    The issue is less encryption and more the balance between your right to privacy and the state's right to intrude.

    This is why banks aren't upset: the law isn't talking about backdooring encryption in general, and bank encryption is between you and the bank, so they don't have to do/say anything.
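
    To make the two models concrete, here is a toy sketch using the cryptography package's Fernet (real messaging apps negotiate keys with protocols like Diffie-Hellman rather than pre-sharing them; this only shows who can read what):

        # Toy model only: Fernet symmetric keys stand in for each
        # "secure connection"; nothing like real messaging-app code.
        from cryptography.fernet import Fernet

        msg = b"meet at 6"

        # 2000s model: the service holds a key per user, so it sees plaintext.
        key_a = Fernet.generate_key()  # secure channel: Person A <-> service
        key_b = Fernet.generate_key()  # secure channel: service <-> Person B
        at_service = Fernet(key_a).decrypt(Fernet(key_a).encrypt(msg))
        # ^ the service holds plaintext here, and could satisfy a warrant
        relayed = Fernet(key_b).encrypt(at_service)  # re-encrypted for Person B

        # End-to-end model: A and B share a key the service never sees, so a
        # warrant served on the service can only yield ciphertext.
        shared_key = Fernet.generate_key()
        ciphertext = Fernet(shared_key).encrypt(msg)
        assert Fernet(shared_key).decrypt(ciphertext) == msg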


  • Similar to most navies.

    Engineering's workload won't really change; they'll carry on doing certain types of maintenance.

    Most navies don't have command staff on the bridge full time. There would be a fairly junior watch officer learning how to operate the ship, so the downtime is an opportunity for them to grow and learn.

    Most navies separate the captain and first officer, with the first officer involved in running the ship and the captain handling the big picture.

    So you would expect the first officer to spend the time checking on every department to ensure they are up to standard.

    That would mean department heads running drills or bringing equipment down for maintenance so it's ready.

    The captain would likely be planning and thinking through the encounter.

    For any free time senior officers have there is probably a mountain of reports (personnel, ship, intelligence, etc…) to read and keep tabs on.


  • Do not mix tabs and spaces.

    It's impossible to automate checking that tabs were only used for indentation and spaces only for precise alignment, so you take on the burden of checking manually.

    You end up with the issue where someone didn't realise and indented with spaces, or another person used tabs for precise alignment; people forget to check the whitespace characters in review, and it ends up inconsistent and becomes a huge pile of technical debt to fix.

    Use only one: you can automate enforcement and ensure the code renders consistently.
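
    A minimal sketch of that kind of automated enforcement, assuming the project standardised on spaces-only indentation:

        # Fail the build if any Python file uses a tab in its indentation.
        import sys
        from pathlib import Path

        def find_tab_indents(root="."):
            violations = []
            for path in Path(root).rglob("*.py"):  # adjust the glob per language
                for lineno, line in enumerate(path.read_text().splitlines(), 1):
                    indent = line[:len(line) - len(line.lstrip())]
                    if "\t" in indent:
                        violations.append(f"{path}:{lineno}: tab in indentation")
            return violations

        if __name__ == "__main__":
            problems = find_tab_indents()
            print("\n".join(problems))
            sys.exit(1 if problems else 0)  # non-zero exit fails CI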



  • Years ago there was no way to share IDE settings between developers.

    You ended up with some developers choosing a tab width of 2 spaces, some choosing 4 spaces, and, as there was no linting enforcement, some people using 2-4 spaces depending on their IDE settings.

    This resulted in an unreadable mess, as stuff was indented to all sorts of random levels.

    It doesn’t matter if you use tabs or spaces as long as only one type is consistently used within a project.

    Spaces tend to win because there are inevitably times you need to use spaces, and so it's difficult to ensure a project only uses tabs for indentation.

    IDEs support converting tabs into spaces based on tab width, and code formatting will ensure correct indentation. You can now have centralised IDE settings, so everyone gets the same setup.

    Honestly, 99% of people don't care about formatting (they only care when consistency isn't enforced and code is hard to read), but there is always one person who wants a 60-character line width, or only tabs, or parentheses on their own lines, who then sucks up huge amounts of the team's time arguing their thing is a must while they code in emacs, unlike the rest of the team using an actual IDE.
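
    That conversion is mechanical once a tab width is agreed; for illustration, Python's str.expandtabs applies the same transformation an IDE performs on save:

        # Convert tab indentation to spaces at an agreed tab width (here 4).
        from pathlib import Path

        TAB_WIDTH = 4

        def detab(path):
            text = Path(path).read_text()
            fixed = "\n".join(line.expandtabs(TAB_WIDTH)
                              for line in text.splitlines())
            Path(path).write_text(fixed + "\n")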


  • I am actually arguing for a stable ABI.

    The few times I have had to compile out-of-tree drivers for the Linux kernel, it has usually failed because the ABI has changed.

    Each time I looked into it, I found code churn, e.g. an enum changed to a char (or the other way), or the parameter order shuffled.

    If I were emperor of the world, the Linux kernel would be built using conan.io, with device trees pulling down drivers as dependencies.

    The Linux ABI headers would move out into their own separately managed project, released and managed at its own rate. Subsystem maintainers would have to raise pull requests to change the ABI, and changing a parameter from enum to char because you prefer chars wouldn't be good enough.

    Each subsystem would be its own "project" with a logical repository structure (e.g. Intel and AMD GPU drivers don't share code, so why would they be in the same repo?), built against the appropriate ABI version, with each repository released at its own rate.

    Unsupported drivers would then be forked into their own repositories. This simplifies deprecation, since it's external to the supported drivers and doesn't need to be refactored or maintained. If distributions can build a driver and want to include it, they can.

    Linus' job would be to maintain the core kernel, device trees and ABI projects, and to provide a bill of materials listing which kernel/ABI/driver version combinations are supported.

    Lastly, since every driver would be a discrete buildable component, it would be far easier for distributions to check whether a driver is compatible with the kernel ABI they are using (e.g. change a dependency version and build) and to ship new drivers with the build.

    None of this will ever happen. C/C++ developers loathe dependency management, and people can be strangely attached to monorepos for some reason.
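
    For what it's worth, the shape of the idea in Conan terms would be something like this (the package names, versions and recipe are entirely hypothetical):

        # Hypothetical sketch: a GPU driver packaged as its own Conan
        # recipe, pinned against a separately versioned ABI package.
        from conan import ConanFile

        class AmdGpuDriverConan(ConanFile):
            name = "amdgpu-driver"
            version = "23.10"
            # The ABI headers live in their own project, released at its own rate.
            requires = "linux-kernel-abi/6.5"
            settings = "os", "arch", "compiler", "build_type"

            def build(self):
                # Build the driver against the pinned ABI headers; bumping the
                # `requires` line above is how you'd test a newer kernel ABI.
                pass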


  • The Linux kernel is very old school in how it is run, and a big part of the original DevSecOps movement was removing exactly this kind of manual overhead.

    Moving to something like Gitea (Codeberg) would give you a better diff view and is quicker/easier than posting a patch to a mailing list.

    The branching model of the kernel is something people write up on paper that looks great (much like Gitflow) but is really time-consuming to manage. Moving to a feature-branch workflow and creating release branches as part of the release process would allow a ton of things to be automated and simplified.

    Similarly, file systems aren't really device-specific, so you could build system tests for them covering benchmarking and standard use cases.

    Setting up a CI to perform smoke testing and linting is fairly standard.

    It's really easy to set up a CI that triggers when a branch/PR is created or updated; that reduces review to checking business logic, which makes reviews really quick and easy.

    Similarly, moving to a decent issue tracker would help: Jira's support for epics/stories/tasks/capabilities and its linking ability is a huge simplifier for long-term planning.

    You can do things like define OKRs, attach epics to them and stories/tasks to the epics, which lets you track progress towards goals.

    You can use issues the way the Linux community currently uses mailing lists.

    Combined with a Kanban board for tracking the progress of tickets, you remove a ton of pain.

    Although open-source issue trackers are missing Jira's key productivity enablers, which makes these improvements hard to realise.

    The issue is people: the Linux kernel maintainers have been working one way for decades. Getting them to adopt new tools will be heavily resisted, same with changing how they work.

    It's like how everyone outside knows that breaking the ABI definition out from the subsystem implementations would create a far more stable ABI, solving a bunch of issues while still allowing change when needed, except no one in the kernel will entertain the idea.


  • Maven has unit and integration test phases, and there are a multitude of plugins designed to hook into those phases, but there are constraints by design.

    Trying to hook everything into the build management system is a source of technical debt: you're using a tool for something it wasn't designed for.

    I would look at what makes sense within the build management system and what makes sense in a CI pipeline.

    CI tools have different DSLs and usually provide a means to manage environments. Certain integration and system-level tests are best performed there.

    For instance, I keep system tests as a separate managed project. The project can be executed from developer machines against local builds, but I also create a small build pipeline, triggered by pull requests, that builds the project, deploys it and runs the system tests against it.

    This is why I say the build management system doesn't really change: you should treat everything as discrete standalone components.

    The parent POM gets updated once every six months, the basic build-verification CI pipeline only changes for the latest language release, etc…

    Projects which try to embed Gitflow into a POM, or integrate CD into the Gradle file, are the unbuildable messes I get asked to fix.