I couldn’t care less about another browser without clear benefits over Firefox, but I think this is the main takeaway - Swift is usable for production development on Windows these days apparently, which is awesome.
Looks like the original post (short article?) was in 2010. Thinking of how C++ has evolved since then, I have a hard time believing anyone but the most involved and dedicated C++ developers can really understand all of how C++ works. Heck, even the compilers don’t seem to have a full grasp of C++ these days (at least with regards to modules).
Some friends and I once went to visit one of their families for spring break back in college. We made the mistake of starting a server.
The spring break ended. We left the room maybe twice a day to eat food, around 7pm and 4am. The factory grew. I think there was a family there, I can’t remember though.
For very simple backends, it’s very unlikely you’ll get any significant number of bugs with an experienced team, and if performance isn’t really a concern, then Rust being faster isn’t really relevant. For anything more complex than a simple backend, I’d agree that Rust becomes a lot more appealing, but if you just need to throw together something that handles user profiles or something in a very simple manner, it really doesn’t make a difference what language you do it in as long as you write a few tests to make sure everything works.
Wouldn’t be any other laptop :P
they actually have to reference the function by string name.
This is true of a lot of the opt-in language features though, isn’t it? For example, you can just make an `.Add` method on any `IEnumerable` type and get collection initializer syntax supported for it, even as an extension method. The same works for `Dispose` on ref structs I believe, and I remember there being a few other places where this was true (`GetAwaiter` I think?).
I think one of the things holding back some of the more impactful features we could see in C# is the need to also update the CLR in many cases to handle things like new kinds of types, new kinds of expressions, etc. TypeScript has the benefit of being executed by a dynamic runtime, but C#'s runtime is unfortunately statically typed, meaning it also needs to be updated with the language. It’s also used by multiple languages, for what it’s worth.
That being said, if they redirected some of their efforts towards improving the CLR as well, I think they could put out all the cool features they’ve mostly sidelined, like DUs and some form of their extension everything proposal.
I essentially got Starfield for free (bought a laptop and it came with a code). For $0 it was worth every dollar spent, but I do feel bad for the people who pre-ordered.
I can’t imagine why someone would pre-order a game like this one though, these devs don’t exactly have a great track record. At least for the people who pre-ordered Starfield, we know Bethesda will at minimum deliver a game lol.
This highly depends on what it is you’re trying to build. If it’s a simple CRUD backend + database, then there’s really no reason to use Rust except if you just want to. If it’s doing heavy computation, then you’d want to benchmark both and see if any potential gains by writing it in Rust are worth the effort of using Rust over Node.js.
Practically speaking, it’s really uncommon to need to write a backend in Rust over something like JS or Python. Usually that’s only needed for high throughput services (like Cloudflare’s proxy service which handles trillions of daily requests), or ones performing computationally expensive work they can’t offload to another service (after benchmarking first of course).
Even if you take my spending (which was in the hundreds) on Warframe into account, it was still worth the thousands of hours I put into the game. It’s really just a matter of whether you enjoy the game enough to justify the spending.
It’s been years since I last played, but back then you could tell the devs genuinely loved their game and were passionate to build it up. I hope the same is true today, and considering the game is still actively developed, I’d imagine it is.
Honestly, I have no issue with exclusives as long as they get released on another platform after a while. Sony’s been good about releasing a lot of the hits on PC after a couple years, so aside from missing the initial hype, I haven’t really missed out being PC only.
Exclusives that stay exclusive indefinitely, I basically treat those games as if they don’t exist. I don’t have anywhere to put a PS5, nor a desire to get one really, and as far as I know they make most of their money from game sales anyway. I don’t see much value in them locking people out of their games completely.
Shadow of Mordor took me two attempts to get into, but I’m glad I went back for the second attempt because the game ended up surprising me with how excellent the storytelling and gameplay were.
Shadow of War was pretty good as well, but the online stuff was starting to kill it for me and it was clearly designed with microtransactions in mind (even though I believe they removed them?). Still a fun game though, and didn’t feel like a waste of money especially on sale.
A new entry in the series could be a lot of fun if they held true to what made Shadow of Mordor such a fun game, in my opinion.
The early entries in the Ratchet & Clank series. I believe they remastered the first three at some point for PS3, but I’d love to see them updated again to today’s definition of HD and released on PC as well. Unfortunately I only played the first four games back on the PS2, since I switched fully to PC gaming after that, but those games were all a lot of fun. Playing the recent PC port of Rift Apart made me realize how much I missed the series, and if they remastered and ported the whole thing to PC, I’d easily lose an entire paycheck to that.
Yep, bias exists everywhere. There’s no avoiding it. Reddit does have the benefit that biases tend to change from sub to sub though. Lemmy instances that I’ve seen (not defederated ones) tend to hold the same FOSS bias, but the intensity of it varies from instance to instance.
This is one of the reasons I like what Beehaw does. Your options are upvote, no vote, and report. Anything worthy of a downvote is worthy of a report.
Agreed, feels like the vast majority of people here are FOSS enthusiasts, which isn’t a bad thing necessarily if you align with them, but definitely a bias and could put off people who genuinely don’t care about FOSS or tech in general.
Not sure about other companies, but at the one I work at, recommending a training doesn’t mean a whole lot except “this might be relevant to your work”. For example, in this case an employee expressed concerns of being discriminated against, so it makes sense to recommend training on how to identify and address those kinds of problems (even if no such situation is actually occurring) so that you’re better prepared to handle it.
The only modern language that gets it right is Swift:
print("🤦🏼‍♂️".count) // => 1
Minor, but I’m not sure this is as unambiguous as the article claims. It’s true that for someone “that isn’t burdened with computer internals”, this is the most obvious “length” of the string, but programmers are by definition burdened with computer internals. That’s not to say the length shouldn’t be 1 though, it’s more that the “length” field/property has a terrible name, and asking for the length of a string is a very ambiguous question to begin with.
Instead, I think a better solution is to be clear about which length you’re actually referring to. For example, in Rust, the `.len()` method documents itself as returning the number of bytes in the string and warns that this may not be what you’re interested in. Similarly, `.chars()` clarifies that it iterates over Unicode scalar values, not grapheme clusters (and that grapheme clusters are unfortunately not handled by the standard library).
For most high level applications, I think you generally do want to work with grapheme clusters, and what Swift does makes sense (assuming you can also iterate over the individual bytes somehow for low level operations). As long as it is clearly documented what your “length” refers to, and assuming the other lengths can be calculated, I think any reasonably useful length is valid.
The article they link in that section does cover a lot of the nuances between them, and is a great read for more discussion around what the length should be.
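To make the ambiguity concrete, here’s a small sketch in Rust comparing the different “lengths” of the same facepalm emoji (the five scalar values are spelled out as escapes so nothing gets mangled in transit):

```rust
fn main() {
    // U+1F926 FACE PALM + U+1F3FC skin tone + U+200D ZWJ + U+2642 MALE SIGN + U+FE0F variation selector
    let s = "\u{1F926}\u{1F3FC}\u{200D}\u{2642}\u{FE0F}";

    // Bytes in the UTF-8 encoding
    assert_eq!(s.len(), 17);
    // Unicode scalar values
    assert_eq!(s.chars().count(), 5);
    // UTF-16 code units (what JavaScript's .length counts)
    assert_eq!(s.encode_utf16().count(), 7);
    // Grapheme clusters (what Swift's .count reports) would be 1, but counting
    // them in Rust requires an external crate such as unicode-segmentation.
    println!("all lengths check out");
}
```

So the same string legitimately has length 17, 7, 5, or 1 depending on which question you’re asking, which is exactly why a bare “length” name is so misleading.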
Edit: I should also add that Korean adds some additional complexity here. For example, what’s the string length of 각? Is it 1, because it visually consumes a single “space”? Or is it 3, because it’s 3 letters (ㄱ, ㅏ, ㄱ)? Swift says the length is 1.
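The Korean case is even trickier because the same syllable can be stored two ways: as one precomposed code point, or as three conjoining jamo. A quick Rust sketch (scalar values written as escapes):

```rust
fn main() {
    // 각 as the single precomposed syllable U+AC01
    let precomposed = "\u{AC01}";
    // The same syllable as three conjoining jamo: U+1100, U+1161, U+11A8
    let decomposed = "\u{1100}\u{1161}\u{11A8}";

    // Scalar-value counts differ even though both render identically
    assert_eq!(precomposed.chars().count(), 1);
    assert_eq!(decomposed.chars().count(), 3);

    // Byte counts differ too
    assert_eq!(precomposed.len(), 3);
    assert_eq!(decomposed.len(), 9);
    // Both forms are a single grapheme cluster, which is why Swift
    // reports a count of 1 for either spelling.
}
```

So even “how many scalar values?” doesn’t have one answer for 각; only the grapheme-cluster count is stable across the two encodings.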
I doubt it’s the case, but if they remastered the PS2 R&C games and sold them as a collection on PC, my wallet would hate me.